00:00:00.001 Started by upstream project "autotest-per-patch" build number 132727 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.015 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.015 The recommended git tool is: git 00:00:00.016 using credential 00000000-0000-0000-0000-000000000002 00:00:00.017 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.032 Fetching changes from the remote Git repository 00:00:00.036 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.051 Using shallow fetch with depth 1 00:00:00.051 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.051 > git --version # timeout=10 00:00:00.082 > git --version # 'git version 2.39.2' 00:00:00.082 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.138 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.138 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.295 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.307 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.319 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:02.319 > git config core.sparsecheckout # timeout=10 00:00:02.331 > git read-tree -mu HEAD # timeout=10 00:00:02.346 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:02.375 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:02.375 > git rev-list 
--no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:02.496 [Pipeline] Start of Pipeline 00:00:02.513 [Pipeline] library 00:00:02.514 Loading library shm_lib@master 00:00:02.515 Library shm_lib@master is cached. Copying from home. 00:00:02.531 [Pipeline] node 00:00:02.540 Running on VM-host-SM17 in /var/jenkins/workspace/raid-vg-autotest_2 00:00:02.541 [Pipeline] { 00:00:02.552 [Pipeline] catchError 00:00:02.553 [Pipeline] { 00:00:02.566 [Pipeline] wrap 00:00:02.574 [Pipeline] { 00:00:02.580 [Pipeline] stage 00:00:02.581 [Pipeline] { (Prologue) 00:00:02.595 [Pipeline] echo 00:00:02.596 Node: VM-host-SM17 00:00:02.601 [Pipeline] cleanWs 00:00:02.608 [WS-CLEANUP] Deleting project workspace... 00:00:02.608 [WS-CLEANUP] Deferred wipeout is used... 00:00:02.615 [WS-CLEANUP] done 00:00:02.791 [Pipeline] setCustomBuildProperty 00:00:02.877 [Pipeline] httpRequest 00:00:03.291 [Pipeline] echo 00:00:03.293 Sorcerer 10.211.164.101 is alive 00:00:03.303 [Pipeline] retry 00:00:03.305 [Pipeline] { 00:00:03.318 [Pipeline] httpRequest 00:00:03.322 HttpMethod: GET 00:00:03.323 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.323 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.330 Response Code: HTTP/1.1 200 OK 00:00:03.331 Success: Status code 200 is in the accepted range: 200,404 00:00:03.331 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:21.823 [Pipeline] } 00:00:21.835 [Pipeline] // retry 00:00:21.839 [Pipeline] sh 00:00:22.119 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:22.135 [Pipeline] httpRequest 00:00:23.027 [Pipeline] echo 00:00:23.029 Sorcerer 10.211.164.101 is alive 00:00:23.038 [Pipeline] retry 00:00:23.040 [Pipeline] { 00:00:23.054 [Pipeline] httpRequest 00:00:23.060 HttpMethod: GET 00:00:23.061 URL: 
http://10.211.164.101/packages/spdk_e9db163741a52a58a0d826ae1adef2e09f0f349d.tar.gz 00:00:23.061 Sending request to url: http://10.211.164.101/packages/spdk_e9db163741a52a58a0d826ae1adef2e09f0f349d.tar.gz 00:00:23.070 Response Code: HTTP/1.1 200 OK 00:00:23.070 Success: Status code 200 is in the accepted range: 200,404 00:00:23.071 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/spdk_e9db163741a52a58a0d826ae1adef2e09f0f349d.tar.gz 00:05:26.377 [Pipeline] } 00:05:26.394 [Pipeline] // retry 00:05:26.404 [Pipeline] sh 00:05:26.690 + tar --no-same-owner -xf spdk_e9db163741a52a58a0d826ae1adef2e09f0f349d.tar.gz 00:05:30.024 [Pipeline] sh 00:05:30.302 + git -C spdk log --oneline -n5 00:05:30.302 e9db16374 nvme: add spdk_nvme_poll_group_get_fd_group() 00:05:30.302 cf089b398 thread: fd_group-based interrupts 00:05:30.302 8a4656bc1 thread: move interrupt allocation to a function 00:05:30.302 09908f908 util: add method for setting fd_group's wrapper 00:05:30.302 697130caf util: multi-level fd_group nesting 00:05:30.320 [Pipeline] writeFile 00:05:30.338 [Pipeline] sh 00:05:30.621 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:05:30.632 [Pipeline] sh 00:05:30.912 + cat autorun-spdk.conf 00:05:30.912 SPDK_RUN_FUNCTIONAL_TEST=1 00:05:30.912 SPDK_RUN_ASAN=1 00:05:30.912 SPDK_RUN_UBSAN=1 00:05:30.912 SPDK_TEST_RAID=1 00:05:30.912 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:30.919 RUN_NIGHTLY=0 00:05:30.920 [Pipeline] } 00:05:30.934 [Pipeline] // stage 00:05:30.951 [Pipeline] stage 00:05:30.953 [Pipeline] { (Run VM) 00:05:30.968 [Pipeline] sh 00:05:31.252 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:05:31.252 + echo 'Start stage prepare_nvme.sh' 00:05:31.252 Start stage prepare_nvme.sh 00:05:31.252 + [[ -n 5 ]] 00:05:31.252 + disk_prefix=ex5 00:05:31.252 + [[ -n /var/jenkins/workspace/raid-vg-autotest_2 ]] 00:05:31.252 + [[ -e /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf ]] 00:05:31.252 + source 
/var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf 00:05:31.252 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:31.253 ++ SPDK_RUN_ASAN=1 00:05:31.253 ++ SPDK_RUN_UBSAN=1 00:05:31.253 ++ SPDK_TEST_RAID=1 00:05:31.253 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:31.253 ++ RUN_NIGHTLY=0 00:05:31.253 + cd /var/jenkins/workspace/raid-vg-autotest_2 00:05:31.253 + nvme_files=() 00:05:31.253 + declare -A nvme_files 00:05:31.253 + backend_dir=/var/lib/libvirt/images/backends 00:05:31.253 + nvme_files['nvme.img']=5G 00:05:31.253 + nvme_files['nvme-cmb.img']=5G 00:05:31.253 + nvme_files['nvme-multi0.img']=4G 00:05:31.253 + nvme_files['nvme-multi1.img']=4G 00:05:31.253 + nvme_files['nvme-multi2.img']=4G 00:05:31.253 + nvme_files['nvme-openstack.img']=8G 00:05:31.253 + nvme_files['nvme-zns.img']=5G 00:05:31.253 + (( SPDK_TEST_NVME_PMR == 1 )) 00:05:31.253 + (( SPDK_TEST_FTL == 1 )) 00:05:31.253 + (( SPDK_TEST_NVME_FDP == 1 )) 00:05:31.253 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:05:31.253 + for nvme in "${!nvme_files[@]}" 00:05:31.253 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:05:31.253 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:05:31.253 + for nvme in "${!nvme_files[@]}" 00:05:31.253 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:05:31.253 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:05:31.253 + for nvme in "${!nvme_files[@]}" 00:05:31.253 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:05:31.253 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:05:31.253 + for nvme in "${!nvme_files[@]}" 00:05:31.253 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n 
/var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:05:31.253 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:05:31.253 + for nvme in "${!nvme_files[@]}" 00:05:31.253 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:05:31.253 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:05:31.253 + for nvme in "${!nvme_files[@]}" 00:05:31.253 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:05:31.253 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:05:31.253 + for nvme in "${!nvme_files[@]}" 00:05:31.253 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:05:31.540 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:05:31.540 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:05:31.540 + echo 'End stage prepare_nvme.sh' 00:05:31.540 End stage prepare_nvme.sh 00:05:31.552 [Pipeline] sh 00:05:31.836 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:05:31.836 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39 00:05:31.836 00:05:31.836 DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant 00:05:31.836 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk 00:05:31.836 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest_2 00:05:31.836 HELP=0 
00:05:31.836 DRY_RUN=0 00:05:31.836 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:05:31.836 NVME_DISKS_TYPE=nvme,nvme, 00:05:31.836 NVME_AUTO_CREATE=0 00:05:31.836 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:05:31.836 NVME_CMB=,, 00:05:31.836 NVME_PMR=,, 00:05:31.836 NVME_ZNS=,, 00:05:31.836 NVME_MS=,, 00:05:31.836 NVME_FDP=,, 00:05:31.836 SPDK_VAGRANT_DISTRO=fedora39 00:05:31.836 SPDK_VAGRANT_VMCPU=10 00:05:31.836 SPDK_VAGRANT_VMRAM=12288 00:05:31.836 SPDK_VAGRANT_PROVIDER=libvirt 00:05:31.836 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:05:31.836 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:05:31.836 SPDK_OPENSTACK_NETWORK=0 00:05:31.836 VAGRANT_PACKAGE_BOX=0 00:05:31.836 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:05:31.836 FORCE_DISTRO=true 00:05:31.836 VAGRANT_BOX_VERSION= 00:05:31.836 EXTRA_VAGRANTFILES= 00:05:31.836 NIC_MODEL=e1000 00:05:31.836 00:05:31.836 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt' 00:05:31.836 /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest_2 00:05:35.117 Bringing machine 'default' up with 'libvirt' provider... 00:05:36.050 ==> default: Creating image (snapshot of base box volume). 00:05:36.308 ==> default: Creating domain with the following settings... 
00:05:36.308 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733490022_e17a9cc2ba564436e9f9 00:05:36.308 ==> default: -- Domain type: kvm 00:05:36.308 ==> default: -- Cpus: 10 00:05:36.308 ==> default: -- Feature: acpi 00:05:36.308 ==> default: -- Feature: apic 00:05:36.308 ==> default: -- Feature: pae 00:05:36.308 ==> default: -- Memory: 12288M 00:05:36.308 ==> default: -- Memory Backing: hugepages: 00:05:36.308 ==> default: -- Management MAC: 00:05:36.308 ==> default: -- Loader: 00:05:36.308 ==> default: -- Nvram: 00:05:36.308 ==> default: -- Base box: spdk/fedora39 00:05:36.308 ==> default: -- Storage pool: default 00:05:36.308 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733490022_e17a9cc2ba564436e9f9.img (20G) 00:05:36.308 ==> default: -- Volume Cache: default 00:05:36.308 ==> default: -- Kernel: 00:05:36.308 ==> default: -- Initrd: 00:05:36.308 ==> default: -- Graphics Type: vnc 00:05:36.308 ==> default: -- Graphics Port: -1 00:05:36.308 ==> default: -- Graphics IP: 127.0.0.1 00:05:36.308 ==> default: -- Graphics Password: Not defined 00:05:36.308 ==> default: -- Video Type: cirrus 00:05:36.308 ==> default: -- Video VRAM: 9216 00:05:36.308 ==> default: -- Sound Type: 00:05:36.308 ==> default: -- Keymap: en-us 00:05:36.308 ==> default: -- TPM Path: 00:05:36.308 ==> default: -- INPUT: type=mouse, bus=ps2 00:05:36.308 ==> default: -- Command line args: 00:05:36.308 ==> default: -> value=-device, 00:05:36.308 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:05:36.308 ==> default: -> value=-drive, 00:05:36.308 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:05:36.308 ==> default: -> value=-device, 00:05:36.308 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:36.308 ==> default: -> value=-device, 00:05:36.308 ==> default: -> 
value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:05:36.308 ==> default: -> value=-drive, 00:05:36.308 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:05:36.308 ==> default: -> value=-device, 00:05:36.308 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:36.308 ==> default: -> value=-drive, 00:05:36.308 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:05:36.308 ==> default: -> value=-device, 00:05:36.308 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:36.308 ==> default: -> value=-drive, 00:05:36.308 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:05:36.308 ==> default: -> value=-device, 00:05:36.308 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:36.308 ==> default: Creating shared folders metadata... 00:05:36.308 ==> default: Starting domain. 00:05:38.233 ==> default: Waiting for domain to get an IP address... 00:06:00.161 ==> default: Waiting for SSH to become available... 00:06:00.728 ==> default: Configuring and enabling network interfaces... 00:06:04.972 default: SSH address: 192.168.121.34:22 00:06:04.972 default: SSH username: vagrant 00:06:04.972 default: SSH auth method: private key 00:06:07.515 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:06:15.689 ==> default: Mounting SSHFS shared folder... 00:06:17.590 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:06:17.590 ==> default: Checking Mount.. 
00:06:18.524 ==> default: Folder Successfully Mounted! 00:06:18.524 ==> default: Running provisioner: file... 00:06:19.459 default: ~/.gitconfig => .gitconfig 00:06:19.789 00:06:19.789 SUCCESS! 00:06:19.789 00:06:19.789 cd to /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:06:19.789 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:06:19.789 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 00:06:19.789 00:06:19.792 [Pipeline] } 00:06:19.808 [Pipeline] // stage 00:06:19.817 [Pipeline] dir 00:06:19.818 Running in /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt 00:06:19.819 [Pipeline] { 00:06:19.830 [Pipeline] catchError 00:06:19.831 [Pipeline] { 00:06:19.842 [Pipeline] sh 00:06:20.119 + vagrant ssh-config --host vagrant 00:06:20.119 + sed -ne /^Host/,$p 00:06:20.119 + tee ssh_conf 00:06:24.300 Host vagrant 00:06:24.300 HostName 192.168.121.34 00:06:24.300 User vagrant 00:06:24.300 Port 22 00:06:24.300 UserKnownHostsFile /dev/null 00:06:24.300 StrictHostKeyChecking no 00:06:24.300 PasswordAuthentication no 00:06:24.300 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:06:24.300 IdentitiesOnly yes 00:06:24.300 LogLevel FATAL 00:06:24.300 ForwardAgent yes 00:06:24.300 ForwardX11 yes 00:06:24.300 00:06:24.313 [Pipeline] withEnv 00:06:24.315 [Pipeline] { 00:06:24.332 [Pipeline] sh 00:06:24.614 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:06:24.614 source /etc/os-release 00:06:24.614 [[ -e /image.version ]] && img=$(< /image.version) 00:06:24.614 # Minimal, systemd-like check. 
00:06:24.614 if [[ -e /.dockerenv ]]; then 00:06:24.614 # Clear garbage from the node's name: 00:06:24.614 # agt-er_autotest_547-896 -> autotest_547-896 00:06:24.614 # $HOSTNAME is the actual container id 00:06:24.614 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:06:24.614 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:06:24.614 # We can assume this is a mount from a host where container is running, 00:06:24.614 # so fetch its hostname to easily identify the target swarm worker. 00:06:24.614 container="$(< /etc/hostname) ($agent)" 00:06:24.614 else 00:06:24.614 # Fallback 00:06:24.614 container=$agent 00:06:24.614 fi 00:06:24.614 fi 00:06:24.614 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:06:24.614 00:06:24.624 [Pipeline] } 00:06:24.641 [Pipeline] // withEnv 00:06:24.651 [Pipeline] setCustomBuildProperty 00:06:24.668 [Pipeline] stage 00:06:24.670 [Pipeline] { (Tests) 00:06:24.688 [Pipeline] sh 00:06:24.967 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:06:25.237 [Pipeline] sh 00:06:25.514 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:06:25.787 [Pipeline] timeout 00:06:25.787 Timeout set to expire in 1 hr 30 min 00:06:25.789 [Pipeline] { 00:06:25.803 [Pipeline] sh 00:06:26.084 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:06:26.659 HEAD is now at e9db16374 nvme: add spdk_nvme_poll_group_get_fd_group() 00:06:26.670 [Pipeline] sh 00:06:26.947 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:06:27.218 [Pipeline] sh 00:06:27.494 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:06:27.766 [Pipeline] sh 00:06:28.043 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 
JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:06:28.301 ++ readlink -f spdk_repo 00:06:28.301 + DIR_ROOT=/home/vagrant/spdk_repo 00:06:28.301 + [[ -n /home/vagrant/spdk_repo ]] 00:06:28.301 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:06:28.301 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:06:28.301 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:06:28.301 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:06:28.301 + [[ -d /home/vagrant/spdk_repo/output ]] 00:06:28.301 + [[ raid-vg-autotest == pkgdep-* ]] 00:06:28.301 + cd /home/vagrant/spdk_repo 00:06:28.301 + source /etc/os-release 00:06:28.301 ++ NAME='Fedora Linux' 00:06:28.301 ++ VERSION='39 (Cloud Edition)' 00:06:28.301 ++ ID=fedora 00:06:28.301 ++ VERSION_ID=39 00:06:28.301 ++ VERSION_CODENAME= 00:06:28.301 ++ PLATFORM_ID=platform:f39 00:06:28.301 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:06:28.301 ++ ANSI_COLOR='0;38;2;60;110;180' 00:06:28.301 ++ LOGO=fedora-logo-icon 00:06:28.301 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:06:28.301 ++ HOME_URL=https://fedoraproject.org/ 00:06:28.301 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:06:28.301 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:06:28.301 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:06:28.301 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:06:28.301 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:06:28.301 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:06:28.301 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:06:28.301 ++ SUPPORT_END=2024-11-12 00:06:28.301 ++ VARIANT='Cloud Edition' 00:06:28.301 ++ VARIANT_ID=cloud 00:06:28.301 + uname -a 00:06:28.301 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:06:28.301 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:28.558 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:28.558 Hugepages 00:06:28.558 
node hugesize free / total 00:06:28.558 node0 1048576kB 0 / 0 00:06:28.558 node0 2048kB 0 / 0 00:06:28.558 00:06:28.558 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:28.815 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:28.815 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:28.815 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:28.815 + rm -f /tmp/spdk-ld-path 00:06:28.815 + source autorun-spdk.conf 00:06:28.815 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:28.815 ++ SPDK_RUN_ASAN=1 00:06:28.815 ++ SPDK_RUN_UBSAN=1 00:06:28.815 ++ SPDK_TEST_RAID=1 00:06:28.815 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:28.815 ++ RUN_NIGHTLY=0 00:06:28.815 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:06:28.815 + [[ -n '' ]] 00:06:28.815 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:06:28.815 + for M in /var/spdk/build-*-manifest.txt 00:06:28.815 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:06:28.815 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:06:28.815 + for M in /var/spdk/build-*-manifest.txt 00:06:28.815 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:06:28.815 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:06:28.815 + for M in /var/spdk/build-*-manifest.txt 00:06:28.815 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:06:28.815 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:06:28.815 ++ uname 00:06:28.815 + [[ Linux == \L\i\n\u\x ]] 00:06:28.815 + sudo dmesg -T 00:06:28.815 + sudo dmesg --clear 00:06:28.815 + dmesg_pid=5212 00:06:28.815 + [[ Fedora Linux == FreeBSD ]] 00:06:28.815 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:28.815 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:28.815 + sudo dmesg -Tw 00:06:28.815 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:06:28.815 + [[ -x /usr/src/fio-static/fio ]] 00:06:28.815 + export FIO_BIN=/usr/src/fio-static/fio 
00:06:28.815 + FIO_BIN=/usr/src/fio-static/fio 00:06:28.815 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:06:28.815 + [[ ! -v VFIO_QEMU_BIN ]] 00:06:28.815 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:06:28.815 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:28.815 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:28.815 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:06:28.815 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:28.815 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:28.815 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:28.815 13:01:15 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:06:28.815 13:01:15 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:28.815 13:01:15 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:28.815 13:01:15 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:06:28.815 13:01:15 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:06:28.815 13:01:15 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:06:28.815 13:01:15 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:28.815 13:01:15 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:06:28.815 13:01:15 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:06:28.815 13:01:15 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:29.073 13:01:15 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:06:29.073 13:01:15 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:29.073 13:01:15 -- scripts/common.sh@15 -- $ shopt -s extglob 00:06:29.073 13:01:15 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:06:29.073 13:01:15 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:29.073 
13:01:15 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:29.073 13:01:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.073 13:01:15 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.073 13:01:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.073 13:01:15 -- paths/export.sh@5 -- $ export PATH 00:06:29.074 13:01:15 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.074 13:01:15 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:06:29.074 13:01:15 -- common/autobuild_common.sh@493 -- $ date +%s 00:06:29.074 13:01:15 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733490075.XXXXXX 00:06:29.074 13:01:15 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733490075.gIza3W 00:06:29.074 13:01:15 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:06:29.074 13:01:15 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:06:29.074 13:01:15 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:06:29.074 13:01:15 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:06:29.074 13:01:15 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:06:29.074 13:01:15 -- common/autobuild_common.sh@509 -- $ get_config_params 00:06:29.074 13:01:15 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:06:29.074 13:01:15 -- common/autotest_common.sh@10 -- $ set +x 00:06:29.074 13:01:15 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 
00:06:29.074 13:01:15 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:06:29.074 13:01:15 -- pm/common@17 -- $ local monitor 00:06:29.074 13:01:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:29.074 13:01:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:29.074 13:01:15 -- pm/common@25 -- $ sleep 1 00:06:29.074 13:01:15 -- pm/common@21 -- $ date +%s 00:06:29.074 13:01:15 -- pm/common@21 -- $ date +%s 00:06:29.074 13:01:15 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733490075 00:06:29.074 13:01:15 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733490075 00:06:29.074 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733490075_collect-vmstat.pm.log 00:06:29.074 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733490075_collect-cpu-load.pm.log 00:06:30.010 13:01:16 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:06:30.010 13:01:16 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:06:30.010 13:01:16 -- spdk/autobuild.sh@12 -- $ umask 022 00:06:30.010 13:01:16 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:06:30.010 13:01:16 -- spdk/autobuild.sh@16 -- $ date -u 00:06:30.010 Fri Dec 6 01:01:16 PM UTC 2024 00:06:30.010 13:01:16 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:06:30.010 v25.01-pre-309-ge9db16374 00:06:30.010 13:01:16 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:06:30.010 13:01:16 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:06:30.010 13:01:16 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:06:30.010 13:01:16 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:06:30.010 13:01:16 -- common/autotest_common.sh@10 -- $ set +x 
00:06:30.010 ************************************ 00:06:30.010 START TEST asan 00:06:30.010 ************************************ 00:06:30.010 using asan 00:06:30.010 13:01:16 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:06:30.010 00:06:30.010 real 0m0.000s 00:06:30.010 user 0m0.000s 00:06:30.010 sys 0m0.000s 00:06:30.010 13:01:16 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:30.010 13:01:16 asan -- common/autotest_common.sh@10 -- $ set +x 00:06:30.010 ************************************ 00:06:30.010 END TEST asan 00:06:30.010 ************************************ 00:06:30.010 13:01:16 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:06:30.010 13:01:16 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:06:30.010 13:01:16 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:06:30.010 13:01:16 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:06:30.010 13:01:16 -- common/autotest_common.sh@10 -- $ set +x 00:06:30.010 ************************************ 00:06:30.010 START TEST ubsan 00:06:30.010 ************************************ 00:06:30.010 using ubsan 00:06:30.010 13:01:16 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:06:30.010 00:06:30.010 real 0m0.000s 00:06:30.010 user 0m0.000s 00:06:30.010 sys 0m0.000s 00:06:30.010 13:01:16 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:30.010 ************************************ 00:06:30.010 13:01:16 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:06:30.010 END TEST ubsan 00:06:30.010 ************************************ 00:06:30.010 13:01:17 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:06:30.010 13:01:17 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:06:30.010 13:01:17 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:06:30.010 13:01:17 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:06:30.010 13:01:17 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:06:30.010 13:01:17 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 
]] 00:06:30.010 13:01:17 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:06:30.010 13:01:17 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:06:30.010 13:01:17 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:06:30.268 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:30.268 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:30.832 Using 'verbs' RDMA provider 00:06:44.019 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:06:58.921 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:06:58.921 Creating mk/config.mk...done. 00:06:58.921 Creating mk/cc.flags.mk...done. 00:06:58.921 Type 'make' to build. 00:06:58.921 13:01:44 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:06:58.921 13:01:44 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:06:58.921 13:01:44 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:06:58.921 13:01:44 -- common/autotest_common.sh@10 -- $ set +x 00:06:58.921 ************************************ 00:06:58.921 START TEST make 00:06:58.921 ************************************ 00:06:58.921 13:01:44 make -- common/autotest_common.sh@1129 -- $ make -j10 00:06:58.921 make[1]: Nothing to be done for 'all'. 
00:07:11.156 The Meson build system 00:07:11.156 Version: 1.5.0 00:07:11.156 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:07:11.156 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:07:11.156 Build type: native build 00:07:11.156 Program cat found: YES (/usr/bin/cat) 00:07:11.156 Project name: DPDK 00:07:11.156 Project version: 24.03.0 00:07:11.156 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:07:11.156 C linker for the host machine: cc ld.bfd 2.40-14 00:07:11.156 Host machine cpu family: x86_64 00:07:11.156 Host machine cpu: x86_64 00:07:11.156 Message: ## Building in Developer Mode ## 00:07:11.156 Program pkg-config found: YES (/usr/bin/pkg-config) 00:07:11.156 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:07:11.156 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:07:11.156 Program python3 found: YES (/usr/bin/python3) 00:07:11.156 Program cat found: YES (/usr/bin/cat) 00:07:11.156 Compiler for C supports arguments -march=native: YES 00:07:11.156 Checking for size of "void *" : 8 00:07:11.156 Checking for size of "void *" : 8 (cached) 00:07:11.156 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:07:11.156 Library m found: YES 00:07:11.156 Library numa found: YES 00:07:11.156 Has header "numaif.h" : YES 00:07:11.156 Library fdt found: NO 00:07:11.156 Library execinfo found: NO 00:07:11.156 Has header "execinfo.h" : YES 00:07:11.156 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:07:11.156 Run-time dependency libarchive found: NO (tried pkgconfig) 00:07:11.156 Run-time dependency libbsd found: NO (tried pkgconfig) 00:07:11.156 Run-time dependency jansson found: NO (tried pkgconfig) 00:07:11.156 Run-time dependency openssl found: YES 3.1.1 00:07:11.156 Run-time dependency libpcap found: YES 1.10.4 00:07:11.156 Has header "pcap.h" with dependency 
libpcap: YES 00:07:11.156 Compiler for C supports arguments -Wcast-qual: YES 00:07:11.156 Compiler for C supports arguments -Wdeprecated: YES 00:07:11.156 Compiler for C supports arguments -Wformat: YES 00:07:11.156 Compiler for C supports arguments -Wformat-nonliteral: NO 00:07:11.156 Compiler for C supports arguments -Wformat-security: NO 00:07:11.156 Compiler for C supports arguments -Wmissing-declarations: YES 00:07:11.156 Compiler for C supports arguments -Wmissing-prototypes: YES 00:07:11.156 Compiler for C supports arguments -Wnested-externs: YES 00:07:11.156 Compiler for C supports arguments -Wold-style-definition: YES 00:07:11.156 Compiler for C supports arguments -Wpointer-arith: YES 00:07:11.156 Compiler for C supports arguments -Wsign-compare: YES 00:07:11.156 Compiler for C supports arguments -Wstrict-prototypes: YES 00:07:11.156 Compiler for C supports arguments -Wundef: YES 00:07:11.156 Compiler for C supports arguments -Wwrite-strings: YES 00:07:11.156 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:07:11.156 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:07:11.156 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:07:11.156 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:07:11.156 Program objdump found: YES (/usr/bin/objdump) 00:07:11.156 Compiler for C supports arguments -mavx512f: YES 00:07:11.156 Checking if "AVX512 checking" compiles: YES 00:07:11.156 Fetching value of define "__SSE4_2__" : 1 00:07:11.156 Fetching value of define "__AES__" : 1 00:07:11.156 Fetching value of define "__AVX__" : 1 00:07:11.156 Fetching value of define "__AVX2__" : 1 00:07:11.156 Fetching value of define "__AVX512BW__" : (undefined) 00:07:11.156 Fetching value of define "__AVX512CD__" : (undefined) 00:07:11.156 Fetching value of define "__AVX512DQ__" : (undefined) 00:07:11.156 Fetching value of define "__AVX512F__" : (undefined) 00:07:11.156 Fetching value of define "__AVX512VL__" : 
(undefined) 00:07:11.156 Fetching value of define "__PCLMUL__" : 1 00:07:11.156 Fetching value of define "__RDRND__" : 1 00:07:11.156 Fetching value of define "__RDSEED__" : 1 00:07:11.156 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:07:11.156 Fetching value of define "__znver1__" : (undefined) 00:07:11.156 Fetching value of define "__znver2__" : (undefined) 00:07:11.156 Fetching value of define "__znver3__" : (undefined) 00:07:11.156 Fetching value of define "__znver4__" : (undefined) 00:07:11.156 Library asan found: YES 00:07:11.156 Compiler for C supports arguments -Wno-format-truncation: YES 00:07:11.156 Message: lib/log: Defining dependency "log" 00:07:11.156 Message: lib/kvargs: Defining dependency "kvargs" 00:07:11.156 Message: lib/telemetry: Defining dependency "telemetry" 00:07:11.156 Library rt found: YES 00:07:11.156 Checking for function "getentropy" : NO 00:07:11.156 Message: lib/eal: Defining dependency "eal" 00:07:11.156 Message: lib/ring: Defining dependency "ring" 00:07:11.156 Message: lib/rcu: Defining dependency "rcu" 00:07:11.156 Message: lib/mempool: Defining dependency "mempool" 00:07:11.156 Message: lib/mbuf: Defining dependency "mbuf" 00:07:11.156 Fetching value of define "__PCLMUL__" : 1 (cached) 00:07:11.156 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:07:11.156 Compiler for C supports arguments -mpclmul: YES 00:07:11.156 Compiler for C supports arguments -maes: YES 00:07:11.156 Compiler for C supports arguments -mavx512f: YES (cached) 00:07:11.156 Compiler for C supports arguments -mavx512bw: YES 00:07:11.156 Compiler for C supports arguments -mavx512dq: YES 00:07:11.156 Compiler for C supports arguments -mavx512vl: YES 00:07:11.156 Compiler for C supports arguments -mvpclmulqdq: YES 00:07:11.156 Compiler for C supports arguments -mavx2: YES 00:07:11.156 Compiler for C supports arguments -mavx: YES 00:07:11.156 Message: lib/net: Defining dependency "net" 00:07:11.156 Message: lib/meter: Defining 
dependency "meter" 00:07:11.156 Message: lib/ethdev: Defining dependency "ethdev" 00:07:11.156 Message: lib/pci: Defining dependency "pci" 00:07:11.156 Message: lib/cmdline: Defining dependency "cmdline" 00:07:11.156 Message: lib/hash: Defining dependency "hash" 00:07:11.156 Message: lib/timer: Defining dependency "timer" 00:07:11.156 Message: lib/compressdev: Defining dependency "compressdev" 00:07:11.156 Message: lib/cryptodev: Defining dependency "cryptodev" 00:07:11.156 Message: lib/dmadev: Defining dependency "dmadev" 00:07:11.156 Compiler for C supports arguments -Wno-cast-qual: YES 00:07:11.156 Message: lib/power: Defining dependency "power" 00:07:11.156 Message: lib/reorder: Defining dependency "reorder" 00:07:11.156 Message: lib/security: Defining dependency "security" 00:07:11.156 Has header "linux/userfaultfd.h" : YES 00:07:11.156 Has header "linux/vduse.h" : YES 00:07:11.156 Message: lib/vhost: Defining dependency "vhost" 00:07:11.156 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:07:11.156 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:07:11.156 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:07:11.156 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:07:11.156 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:07:11.156 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:07:11.156 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:07:11.156 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:07:11.156 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:07:11.156 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:07:11.156 Program doxygen found: YES (/usr/local/bin/doxygen) 00:07:11.156 Configuring doxy-api-html.conf using configuration 00:07:11.156 Configuring doxy-api-man.conf using configuration 00:07:11.156 Program mandb found: YES 
(/usr/bin/mandb) 00:07:11.156 Program sphinx-build found: NO 00:07:11.156 Configuring rte_build_config.h using configuration 00:07:11.156 Message: 00:07:11.156 ================= 00:07:11.156 Applications Enabled 00:07:11.156 ================= 00:07:11.156 00:07:11.156 apps: 00:07:11.156 00:07:11.156 00:07:11.156 Message: 00:07:11.156 ================= 00:07:11.156 Libraries Enabled 00:07:11.156 ================= 00:07:11.156 00:07:11.156 libs: 00:07:11.156 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:07:11.156 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:07:11.156 cryptodev, dmadev, power, reorder, security, vhost, 00:07:11.156 00:07:11.156 Message: 00:07:11.156 =============== 00:07:11.156 Drivers Enabled 00:07:11.156 =============== 00:07:11.156 00:07:11.156 common: 00:07:11.156 00:07:11.156 bus: 00:07:11.156 pci, vdev, 00:07:11.156 mempool: 00:07:11.156 ring, 00:07:11.156 dma: 00:07:11.156 00:07:11.156 net: 00:07:11.156 00:07:11.156 crypto: 00:07:11.156 00:07:11.156 compress: 00:07:11.156 00:07:11.156 vdpa: 00:07:11.156 00:07:11.156 00:07:11.156 Message: 00:07:11.156 ================= 00:07:11.156 Content Skipped 00:07:11.156 ================= 00:07:11.156 00:07:11.156 apps: 00:07:11.156 dumpcap: explicitly disabled via build config 00:07:11.156 graph: explicitly disabled via build config 00:07:11.156 pdump: explicitly disabled via build config 00:07:11.156 proc-info: explicitly disabled via build config 00:07:11.156 test-acl: explicitly disabled via build config 00:07:11.156 test-bbdev: explicitly disabled via build config 00:07:11.157 test-cmdline: explicitly disabled via build config 00:07:11.157 test-compress-perf: explicitly disabled via build config 00:07:11.157 test-crypto-perf: explicitly disabled via build config 00:07:11.157 test-dma-perf: explicitly disabled via build config 00:07:11.157 test-eventdev: explicitly disabled via build config 00:07:11.157 test-fib: explicitly disabled via build config 00:07:11.157 
test-flow-perf: explicitly disabled via build config 00:07:11.157 test-gpudev: explicitly disabled via build config 00:07:11.157 test-mldev: explicitly disabled via build config 00:07:11.157 test-pipeline: explicitly disabled via build config 00:07:11.157 test-pmd: explicitly disabled via build config 00:07:11.157 test-regex: explicitly disabled via build config 00:07:11.157 test-sad: explicitly disabled via build config 00:07:11.157 test-security-perf: explicitly disabled via build config 00:07:11.157 00:07:11.157 libs: 00:07:11.157 argparse: explicitly disabled via build config 00:07:11.157 metrics: explicitly disabled via build config 00:07:11.157 acl: explicitly disabled via build config 00:07:11.157 bbdev: explicitly disabled via build config 00:07:11.157 bitratestats: explicitly disabled via build config 00:07:11.157 bpf: explicitly disabled via build config 00:07:11.157 cfgfile: explicitly disabled via build config 00:07:11.157 distributor: explicitly disabled via build config 00:07:11.157 efd: explicitly disabled via build config 00:07:11.157 eventdev: explicitly disabled via build config 00:07:11.157 dispatcher: explicitly disabled via build config 00:07:11.157 gpudev: explicitly disabled via build config 00:07:11.157 gro: explicitly disabled via build config 00:07:11.157 gso: explicitly disabled via build config 00:07:11.157 ip_frag: explicitly disabled via build config 00:07:11.157 jobstats: explicitly disabled via build config 00:07:11.157 latencystats: explicitly disabled via build config 00:07:11.157 lpm: explicitly disabled via build config 00:07:11.157 member: explicitly disabled via build config 00:07:11.157 pcapng: explicitly disabled via build config 00:07:11.157 rawdev: explicitly disabled via build config 00:07:11.157 regexdev: explicitly disabled via build config 00:07:11.157 mldev: explicitly disabled via build config 00:07:11.157 rib: explicitly disabled via build config 00:07:11.157 sched: explicitly disabled via build config 00:07:11.157 
stack: explicitly disabled via build config 00:07:11.157 ipsec: explicitly disabled via build config 00:07:11.157 pdcp: explicitly disabled via build config 00:07:11.157 fib: explicitly disabled via build config 00:07:11.157 port: explicitly disabled via build config 00:07:11.157 pdump: explicitly disabled via build config 00:07:11.157 table: explicitly disabled via build config 00:07:11.157 pipeline: explicitly disabled via build config 00:07:11.157 graph: explicitly disabled via build config 00:07:11.157 node: explicitly disabled via build config 00:07:11.157 00:07:11.157 drivers: 00:07:11.157 common/cpt: not in enabled drivers build config 00:07:11.157 common/dpaax: not in enabled drivers build config 00:07:11.157 common/iavf: not in enabled drivers build config 00:07:11.157 common/idpf: not in enabled drivers build config 00:07:11.157 common/ionic: not in enabled drivers build config 00:07:11.157 common/mvep: not in enabled drivers build config 00:07:11.157 common/octeontx: not in enabled drivers build config 00:07:11.157 bus/auxiliary: not in enabled drivers build config 00:07:11.157 bus/cdx: not in enabled drivers build config 00:07:11.157 bus/dpaa: not in enabled drivers build config 00:07:11.157 bus/fslmc: not in enabled drivers build config 00:07:11.157 bus/ifpga: not in enabled drivers build config 00:07:11.157 bus/platform: not in enabled drivers build config 00:07:11.157 bus/uacce: not in enabled drivers build config 00:07:11.157 bus/vmbus: not in enabled drivers build config 00:07:11.157 common/cnxk: not in enabled drivers build config 00:07:11.157 common/mlx5: not in enabled drivers build config 00:07:11.157 common/nfp: not in enabled drivers build config 00:07:11.157 common/nitrox: not in enabled drivers build config 00:07:11.157 common/qat: not in enabled drivers build config 00:07:11.157 common/sfc_efx: not in enabled drivers build config 00:07:11.157 mempool/bucket: not in enabled drivers build config 00:07:11.157 mempool/cnxk: not in enabled 
drivers build config 00:07:11.157 mempool/dpaa: not in enabled drivers build config 00:07:11.157 mempool/dpaa2: not in enabled drivers build config 00:07:11.157 mempool/octeontx: not in enabled drivers build config 00:07:11.157 mempool/stack: not in enabled drivers build config 00:07:11.157 dma/cnxk: not in enabled drivers build config 00:07:11.157 dma/dpaa: not in enabled drivers build config 00:07:11.157 dma/dpaa2: not in enabled drivers build config 00:07:11.157 dma/hisilicon: not in enabled drivers build config 00:07:11.157 dma/idxd: not in enabled drivers build config 00:07:11.157 dma/ioat: not in enabled drivers build config 00:07:11.157 dma/skeleton: not in enabled drivers build config 00:07:11.157 net/af_packet: not in enabled drivers build config 00:07:11.157 net/af_xdp: not in enabled drivers build config 00:07:11.157 net/ark: not in enabled drivers build config 00:07:11.157 net/atlantic: not in enabled drivers build config 00:07:11.157 net/avp: not in enabled drivers build config 00:07:11.157 net/axgbe: not in enabled drivers build config 00:07:11.157 net/bnx2x: not in enabled drivers build config 00:07:11.157 net/bnxt: not in enabled drivers build config 00:07:11.157 net/bonding: not in enabled drivers build config 00:07:11.157 net/cnxk: not in enabled drivers build config 00:07:11.157 net/cpfl: not in enabled drivers build config 00:07:11.157 net/cxgbe: not in enabled drivers build config 00:07:11.157 net/dpaa: not in enabled drivers build config 00:07:11.157 net/dpaa2: not in enabled drivers build config 00:07:11.157 net/e1000: not in enabled drivers build config 00:07:11.157 net/ena: not in enabled drivers build config 00:07:11.157 net/enetc: not in enabled drivers build config 00:07:11.157 net/enetfec: not in enabled drivers build config 00:07:11.157 net/enic: not in enabled drivers build config 00:07:11.157 net/failsafe: not in enabled drivers build config 00:07:11.157 net/fm10k: not in enabled drivers build config 00:07:11.157 net/gve: not in 
enabled drivers build config 00:07:11.157 net/hinic: not in enabled drivers build config 00:07:11.157 net/hns3: not in enabled drivers build config 00:07:11.157 net/i40e: not in enabled drivers build config 00:07:11.157 net/iavf: not in enabled drivers build config 00:07:11.157 net/ice: not in enabled drivers build config 00:07:11.157 net/idpf: not in enabled drivers build config 00:07:11.157 net/igc: not in enabled drivers build config 00:07:11.157 net/ionic: not in enabled drivers build config 00:07:11.157 net/ipn3ke: not in enabled drivers build config 00:07:11.157 net/ixgbe: not in enabled drivers build config 00:07:11.157 net/mana: not in enabled drivers build config 00:07:11.157 net/memif: not in enabled drivers build config 00:07:11.157 net/mlx4: not in enabled drivers build config 00:07:11.157 net/mlx5: not in enabled drivers build config 00:07:11.157 net/mvneta: not in enabled drivers build config 00:07:11.157 net/mvpp2: not in enabled drivers build config 00:07:11.157 net/netvsc: not in enabled drivers build config 00:07:11.157 net/nfb: not in enabled drivers build config 00:07:11.157 net/nfp: not in enabled drivers build config 00:07:11.157 net/ngbe: not in enabled drivers build config 00:07:11.157 net/null: not in enabled drivers build config 00:07:11.157 net/octeontx: not in enabled drivers build config 00:07:11.157 net/octeon_ep: not in enabled drivers build config 00:07:11.157 net/pcap: not in enabled drivers build config 00:07:11.157 net/pfe: not in enabled drivers build config 00:07:11.157 net/qede: not in enabled drivers build config 00:07:11.157 net/ring: not in enabled drivers build config 00:07:11.157 net/sfc: not in enabled drivers build config 00:07:11.157 net/softnic: not in enabled drivers build config 00:07:11.157 net/tap: not in enabled drivers build config 00:07:11.157 net/thunderx: not in enabled drivers build config 00:07:11.157 net/txgbe: not in enabled drivers build config 00:07:11.157 net/vdev_netvsc: not in enabled drivers build 
config 00:07:11.157 net/vhost: not in enabled drivers build config 00:07:11.157 net/virtio: not in enabled drivers build config 00:07:11.157 net/vmxnet3: not in enabled drivers build config 00:07:11.157 raw/*: missing internal dependency, "rawdev" 00:07:11.157 crypto/armv8: not in enabled drivers build config 00:07:11.157 crypto/bcmfs: not in enabled drivers build config 00:07:11.157 crypto/caam_jr: not in enabled drivers build config 00:07:11.157 crypto/ccp: not in enabled drivers build config 00:07:11.157 crypto/cnxk: not in enabled drivers build config 00:07:11.157 crypto/dpaa_sec: not in enabled drivers build config 00:07:11.157 crypto/dpaa2_sec: not in enabled drivers build config 00:07:11.157 crypto/ipsec_mb: not in enabled drivers build config 00:07:11.157 crypto/mlx5: not in enabled drivers build config 00:07:11.157 crypto/mvsam: not in enabled drivers build config 00:07:11.157 crypto/nitrox: not in enabled drivers build config 00:07:11.157 crypto/null: not in enabled drivers build config 00:07:11.157 crypto/octeontx: not in enabled drivers build config 00:07:11.157 crypto/openssl: not in enabled drivers build config 00:07:11.157 crypto/scheduler: not in enabled drivers build config 00:07:11.157 crypto/uadk: not in enabled drivers build config 00:07:11.157 crypto/virtio: not in enabled drivers build config 00:07:11.157 compress/isal: not in enabled drivers build config 00:07:11.157 compress/mlx5: not in enabled drivers build config 00:07:11.157 compress/nitrox: not in enabled drivers build config 00:07:11.157 compress/octeontx: not in enabled drivers build config 00:07:11.157 compress/zlib: not in enabled drivers build config 00:07:11.157 regex/*: missing internal dependency, "regexdev" 00:07:11.157 ml/*: missing internal dependency, "mldev" 00:07:11.157 vdpa/ifc: not in enabled drivers build config 00:07:11.157 vdpa/mlx5: not in enabled drivers build config 00:07:11.157 vdpa/nfp: not in enabled drivers build config 00:07:11.157 vdpa/sfc: not in enabled 
drivers build config 00:07:11.157 event/*: missing internal dependency, "eventdev" 00:07:11.157 baseband/*: missing internal dependency, "bbdev" 00:07:11.157 gpu/*: missing internal dependency, "gpudev" 00:07:11.157 00:07:11.157 00:07:11.416 Build targets in project: 85 00:07:11.416 00:07:11.416 DPDK 24.03.0 00:07:11.416 00:07:11.416 User defined options 00:07:11.416 buildtype : debug 00:07:11.416 default_library : shared 00:07:11.416 libdir : lib 00:07:11.416 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:07:11.416 b_sanitize : address 00:07:11.416 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:07:11.416 c_link_args : 00:07:11.416 cpu_instruction_set: native 00:07:11.416 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:07:11.416 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:07:11.416 enable_docs : false 00:07:11.416 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:07:11.416 enable_kmods : false 00:07:11.416 max_lcores : 128 00:07:11.416 tests : false 00:07:11.416 00:07:11.416 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:07:11.983 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:07:12.241 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:07:12.241 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:07:12.241 [3/268] Linking static target lib/librte_kvargs.a 00:07:12.241 [4/268] Compiling C object 
lib/librte_log.a.p/log_log.c.o 00:07:12.499 [5/268] Linking static target lib/librte_log.a 00:07:12.499 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:07:12.757 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:07:12.757 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:07:13.016 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:07:13.016 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:07:13.016 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:07:13.016 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:07:13.016 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:07:13.274 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:07:13.274 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:07:13.274 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:07:13.274 [17/268] Linking static target lib/librte_telemetry.a 00:07:13.532 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:07:13.532 [19/268] Linking target lib/librte_log.so.24.1 00:07:13.791 [20/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:07:13.791 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:07:13.791 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:07:14.050 [23/268] Linking target lib/librte_kvargs.so.24.1 00:07:14.050 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:07:14.308 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:07:14.308 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 
00:07:14.308 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:07:14.308 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:07:14.308 [29/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:07:14.308 [30/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:07:14.567 [31/268] Linking target lib/librte_telemetry.so.24.1 00:07:14.567 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:07:14.567 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:07:14.567 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:07:14.567 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:07:14.825 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:07:15.084 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:07:15.084 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:07:15.084 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:07:15.084 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:07:15.084 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:07:15.084 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:07:15.084 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:07:15.342 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:07:15.601 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:07:15.859 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:07:15.859 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:07:16.117 [48/268] 
Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:07:16.117 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:07:16.376 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:07:16.376 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:07:16.376 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:07:16.634 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:07:16.893 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:07:16.893 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:07:16.893 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:07:17.151 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:07:17.416 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:07:17.673 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:07:17.673 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:07:17.673 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:07:17.930 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:07:17.930 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:07:17.930 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:07:18.187 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:07:18.444 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:07:18.444 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:07:19.008 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:07:19.008 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:07:19.008 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:07:19.008 
[71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:07:19.264 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:07:19.264 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:07:19.522 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:07:19.522 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:07:19.780 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:07:19.780 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:07:19.780 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:07:19.780 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:07:19.780 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:07:20.037 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:07:20.037 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:07:20.037 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:07:20.294 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:07:20.294 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:07:20.294 [86/268] Linking static target lib/librte_ring.a 00:07:20.294 [87/268] Linking static target lib/librte_eal.a 00:07:20.552 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:07:20.809 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:07:20.809 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:07:20.809 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:07:20.809 [92/268] Linking static target lib/librte_mempool.a 00:07:21.066 [93/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:07:21.066 [94/268] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:07:21.066 [95/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:07:21.066 [96/268] Linking static target lib/librte_rcu.a 00:07:21.066 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:07:21.323 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:07:21.323 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:07:21.581 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:07:21.838 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:07:21.838 [102/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:07:21.838 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:07:21.838 [104/268] Linking static target lib/librte_mbuf.a 00:07:21.838 [105/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:07:21.838 [106/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:07:22.096 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:07:22.355 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:07:22.355 [109/268] Linking static target lib/librte_meter.a 00:07:22.355 [110/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:07:22.355 [111/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:07:22.355 [112/268] Linking static target lib/librte_net.a 00:07:22.627 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:07:22.886 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:07:22.886 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:07:23.144 [116/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:07:23.144 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture 
output) 00:07:23.402 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:07:23.661 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:07:23.661 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:07:23.920 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:07:24.199 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:07:24.458 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:07:24.458 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:07:24.717 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:07:24.717 [126/268] Linking static target lib/librte_pci.a 00:07:24.717 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:07:24.717 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:07:24.976 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:07:24.976 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:07:24.976 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:07:24.976 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:07:25.235 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:07:25.235 [134/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:25.235 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:07:25.235 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:07:25.235 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:07:25.495 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:07:25.495 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:07:25.495 [140/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:07:25.495 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:07:25.495 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:07:25.495 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:07:25.754 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:07:25.754 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:07:26.013 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:07:26.291 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:07:26.291 [148/268] Linking static target lib/librte_cmdline.a 00:07:26.552 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:07:26.552 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:07:26.811 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:07:27.070 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:07:27.070 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:07:27.070 [154/268] Linking static target lib/librte_ethdev.a 00:07:27.070 [155/268] Linking static target lib/librte_timer.a 00:07:27.328 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:07:27.328 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:07:27.586 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:07:27.845 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:07:27.845 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:07:27.845 [161/268] Linking static target lib/librte_compressdev.a 00:07:28.103 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:07:28.103 [163/268] 
Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:07:28.361 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:07:28.361 [165/268] Linking static target lib/librte_hash.a 00:07:28.361 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:07:28.619 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:07:28.619 [168/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:07:28.619 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:07:28.619 [170/268] Linking static target lib/librte_dmadev.a 00:07:28.619 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:07:28.876 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:07:29.134 [173/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:29.392 [174/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:07:29.392 [175/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:07:29.650 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:07:29.908 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:07:29.908 [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:07:29.908 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:30.166 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:07:30.166 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:07:30.166 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:07:30.423 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:07:30.423 [184/268] Linking static target lib/librte_cryptodev.a 
00:07:30.679 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:07:30.679 [186/268] Linking static target lib/librte_power.a 00:07:30.936 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:07:31.194 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:07:31.194 [189/268] Linking static target lib/librte_reorder.a 00:07:31.450 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:07:32.014 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:07:32.270 [192/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:07:32.270 [193/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:07:32.270 [194/268] Linking static target lib/librte_security.a 00:07:32.527 [195/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:07:32.527 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:07:33.456 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:07:33.456 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:07:33.456 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:07:33.714 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:07:33.971 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:07:34.283 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:07:34.283 [203/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:34.283 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:07:34.283 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:07:34.850 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:07:34.850 [207/268] 
Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:07:35.107 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:07:35.107 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:07:35.366 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:07:35.366 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:07:35.366 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:07:35.366 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:35.366 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:35.366 [215/268] Linking static target drivers/librte_bus_vdev.a 00:07:35.933 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:07:35.934 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:35.934 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:35.934 [219/268] Linking static target drivers/librte_bus_pci.a 00:07:35.934 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:07:35.934 [221/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:35.934 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:07:36.501 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:07:36.501 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:36.501 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:36.501 [226/268] Linking static target drivers/librte_mempool_ring.a 00:07:36.760 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to 
capture output) 00:07:36.760 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:07:36.760 [229/268] Linking target lib/librte_eal.so.24.1 00:07:37.019 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:07:37.277 [231/268] Linking target lib/librte_ring.so.24.1 00:07:37.277 [232/268] Linking target lib/librte_meter.so.24.1 00:07:37.277 [233/268] Linking target lib/librte_pci.so.24.1 00:07:37.277 [234/268] Linking target drivers/librte_bus_vdev.so.24.1 00:07:37.277 [235/268] Linking target lib/librte_timer.so.24.1 00:07:37.277 [236/268] Linking target lib/librte_dmadev.so.24.1 00:07:37.277 [237/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:07:37.277 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:07:37.557 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:07:37.557 [240/268] Linking target lib/librte_mempool.so.24.1 00:07:37.557 [241/268] Linking target lib/librte_rcu.so.24.1 00:07:37.557 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:07:37.557 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:07:37.557 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:07:37.557 [245/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:07:37.557 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:07:37.557 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:07:37.815 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:07:37.815 [249/268] Linking target lib/librte_mbuf.so.24.1 00:07:37.815 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:07:38.074 [251/268] Linking target lib/librte_compressdev.so.24.1 00:07:38.074 [252/268] Linking 
target lib/librte_reorder.so.24.1 00:07:38.074 [253/268] Linking target lib/librte_net.so.24.1 00:07:38.074 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:07:38.074 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:07:38.350 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:07:38.350 [257/268] Linking target lib/librte_cmdline.so.24.1 00:07:38.350 [258/268] Linking target lib/librte_hash.so.24.1 00:07:38.350 [259/268] Linking target lib/librte_security.so.24.1 00:07:38.350 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:07:38.916 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:38.917 [262/268] Linking target lib/librte_ethdev.so.24.1 00:07:39.175 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:07:39.175 [264/268] Linking target lib/librte_power.so.24.1 00:07:43.396 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:07:43.396 [266/268] Linking static target lib/librte_vhost.a 00:07:44.770 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:07:44.770 [268/268] Linking target lib/librte_vhost.so.24.1 00:07:44.770 INFO: autodetecting backend as ninja 00:07:44.770 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:08:16.865 CC lib/log/log.o 00:08:16.865 CC lib/log/log_flags.o 00:08:16.865 CC lib/log/log_deprecated.o 00:08:16.865 CC lib/ut_mock/mock.o 00:08:16.865 CC lib/ut/ut.o 00:08:16.865 LIB libspdk_ut.a 00:08:16.865 LIB libspdk_log.a 00:08:16.865 LIB libspdk_ut_mock.a 00:08:16.865 SO libspdk_ut.so.2.0 00:08:16.865 SO libspdk_ut_mock.so.6.0 00:08:16.865 SO libspdk_log.so.7.1 00:08:16.865 SYMLINK libspdk_ut.so 00:08:16.865 SYMLINK libspdk_ut_mock.so 00:08:16.865 SYMLINK libspdk_log.so 
00:08:16.865 CC lib/util/bit_array.o 00:08:16.865 CC lib/util/cpuset.o 00:08:16.865 CC lib/dma/dma.o 00:08:16.865 CC lib/util/base64.o 00:08:16.865 CC lib/util/crc32.o 00:08:16.865 CC lib/util/crc16.o 00:08:16.865 CC lib/util/crc32c.o 00:08:16.865 CC lib/ioat/ioat.o 00:08:16.865 CXX lib/trace_parser/trace.o 00:08:16.865 CC lib/vfio_user/host/vfio_user_pci.o 00:08:16.865 CC lib/util/crc32_ieee.o 00:08:16.865 CC lib/util/crc64.o 00:08:16.865 CC lib/util/dif.o 00:08:16.865 CC lib/vfio_user/host/vfio_user.o 00:08:16.865 LIB libspdk_dma.a 00:08:16.865 CC lib/util/fd.o 00:08:16.865 SO libspdk_dma.so.5.0 00:08:16.865 CC lib/util/fd_group.o 00:08:16.865 CC lib/util/file.o 00:08:16.865 CC lib/util/hexlify.o 00:08:16.865 SYMLINK libspdk_dma.so 00:08:16.865 LIB libspdk_ioat.a 00:08:16.865 CC lib/util/iov.o 00:08:16.865 SO libspdk_ioat.so.7.0 00:08:16.865 CC lib/util/math.o 00:08:16.865 SYMLINK libspdk_ioat.so 00:08:16.865 CC lib/util/net.o 00:08:16.865 CC lib/util/pipe.o 00:08:16.865 LIB libspdk_vfio_user.a 00:08:16.865 CC lib/util/strerror_tls.o 00:08:16.865 CC lib/util/string.o 00:08:16.865 SO libspdk_vfio_user.so.5.0 00:08:16.865 SYMLINK libspdk_vfio_user.so 00:08:16.865 CC lib/util/uuid.o 00:08:16.865 CC lib/util/xor.o 00:08:16.865 CC lib/util/zipf.o 00:08:16.865 CC lib/util/md5.o 00:08:16.865 LIB libspdk_util.a 00:08:16.865 SO libspdk_util.so.10.1 00:08:16.865 LIB libspdk_trace_parser.a 00:08:16.865 SO libspdk_trace_parser.so.6.0 00:08:16.865 SYMLINK libspdk_util.so 00:08:16.865 SYMLINK libspdk_trace_parser.so 00:08:16.865 CC lib/json/json_parse.o 00:08:16.865 CC lib/rdma_utils/rdma_utils.o 00:08:16.865 CC lib/json/json_util.o 00:08:16.865 CC lib/json/json_write.o 00:08:16.865 CC lib/vmd/vmd.o 00:08:16.865 CC lib/env_dpdk/env.o 00:08:16.865 CC lib/env_dpdk/memory.o 00:08:16.865 CC lib/vmd/led.o 00:08:16.865 CC lib/conf/conf.o 00:08:16.865 CC lib/idxd/idxd.o 00:08:16.865 CC lib/env_dpdk/pci.o 00:08:16.865 CC lib/idxd/idxd_user.o 00:08:16.865 CC lib/idxd/idxd_kernel.o 
00:08:16.865 LIB libspdk_conf.a 00:08:16.865 SO libspdk_conf.so.6.0 00:08:16.865 LIB libspdk_rdma_utils.a 00:08:16.865 LIB libspdk_json.a 00:08:16.865 SO libspdk_rdma_utils.so.1.0 00:08:16.865 SO libspdk_json.so.6.0 00:08:16.865 SYMLINK libspdk_conf.so 00:08:16.865 CC lib/env_dpdk/init.o 00:08:16.865 CC lib/env_dpdk/threads.o 00:08:16.865 SYMLINK libspdk_rdma_utils.so 00:08:16.865 CC lib/env_dpdk/pci_ioat.o 00:08:16.865 SYMLINK libspdk_json.so 00:08:16.865 CC lib/env_dpdk/pci_virtio.o 00:08:16.865 CC lib/env_dpdk/pci_vmd.o 00:08:16.865 CC lib/env_dpdk/pci_idxd.o 00:08:16.865 CC lib/rdma_provider/common.o 00:08:16.865 CC lib/jsonrpc/jsonrpc_server.o 00:08:16.865 CC lib/env_dpdk/pci_event.o 00:08:16.865 CC lib/rdma_provider/rdma_provider_verbs.o 00:08:16.865 CC lib/env_dpdk/sigbus_handler.o 00:08:16.866 CC lib/env_dpdk/pci_dpdk.o 00:08:16.866 LIB libspdk_idxd.a 00:08:16.866 LIB libspdk_vmd.a 00:08:16.866 SO libspdk_idxd.so.12.1 00:08:16.866 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:08:16.866 SO libspdk_vmd.so.6.0 00:08:16.866 CC lib/jsonrpc/jsonrpc_client.o 00:08:16.866 SYMLINK libspdk_idxd.so 00:08:16.866 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:08:16.866 CC lib/env_dpdk/pci_dpdk_2207.o 00:08:16.866 CC lib/env_dpdk/pci_dpdk_2211.o 00:08:16.866 SYMLINK libspdk_vmd.so 00:08:16.866 LIB libspdk_rdma_provider.a 00:08:16.866 SO libspdk_rdma_provider.so.7.0 00:08:16.866 SYMLINK libspdk_rdma_provider.so 00:08:16.866 LIB libspdk_jsonrpc.a 00:08:16.866 SO libspdk_jsonrpc.so.6.0 00:08:16.866 SYMLINK libspdk_jsonrpc.so 00:08:17.125 CC lib/rpc/rpc.o 00:08:17.384 LIB libspdk_rpc.a 00:08:17.384 SO libspdk_rpc.so.6.0 00:08:17.643 SYMLINK libspdk_rpc.so 00:08:17.643 LIB libspdk_env_dpdk.a 00:08:17.643 SO libspdk_env_dpdk.so.15.1 00:08:17.901 CC lib/keyring/keyring_rpc.o 00:08:17.901 CC lib/keyring/keyring.o 00:08:17.901 CC lib/trace/trace.o 00:08:17.901 CC lib/trace/trace_flags.o 00:08:17.901 CC lib/trace/trace_rpc.o 00:08:17.901 CC lib/notify/notify.o 00:08:17.901 CC 
lib/notify/notify_rpc.o 00:08:17.901 SYMLINK libspdk_env_dpdk.so 00:08:17.901 LIB libspdk_notify.a 00:08:18.162 SO libspdk_notify.so.6.0 00:08:18.162 LIB libspdk_keyring.a 00:08:18.162 SO libspdk_keyring.so.2.0 00:08:18.162 LIB libspdk_trace.a 00:08:18.162 SYMLINK libspdk_notify.so 00:08:18.162 SO libspdk_trace.so.11.0 00:08:18.162 SYMLINK libspdk_keyring.so 00:08:18.162 SYMLINK libspdk_trace.so 00:08:18.419 CC lib/thread/thread.o 00:08:18.419 CC lib/thread/iobuf.o 00:08:18.419 CC lib/sock/sock.o 00:08:18.419 CC lib/sock/sock_rpc.o 00:08:18.984 LIB libspdk_sock.a 00:08:18.984 SO libspdk_sock.so.10.0 00:08:19.242 SYMLINK libspdk_sock.so 00:08:19.500 CC lib/nvme/nvme_ctrlr_cmd.o 00:08:19.500 CC lib/nvme/nvme_ctrlr.o 00:08:19.500 CC lib/nvme/nvme_fabric.o 00:08:19.500 CC lib/nvme/nvme_ns_cmd.o 00:08:19.500 CC lib/nvme/nvme_pcie_common.o 00:08:19.500 CC lib/nvme/nvme_ns.o 00:08:19.500 CC lib/nvme/nvme_pcie.o 00:08:19.500 CC lib/nvme/nvme_qpair.o 00:08:19.500 CC lib/nvme/nvme.o 00:08:20.434 CC lib/nvme/nvme_quirks.o 00:08:20.434 CC lib/nvme/nvme_transport.o 00:08:20.434 CC lib/nvme/nvme_discovery.o 00:08:20.691 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:08:20.691 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:08:20.691 CC lib/nvme/nvme_tcp.o 00:08:20.691 CC lib/nvme/nvme_opal.o 00:08:20.691 LIB libspdk_thread.a 00:08:20.691 SO libspdk_thread.so.11.0 00:08:20.950 SYMLINK libspdk_thread.so 00:08:20.950 CC lib/nvme/nvme_io_msg.o 00:08:20.950 CC lib/nvme/nvme_poll_group.o 00:08:20.950 CC lib/nvme/nvme_zns.o 00:08:20.950 CC lib/nvme/nvme_stubs.o 00:08:21.207 CC lib/nvme/nvme_auth.o 00:08:21.207 CC lib/nvme/nvme_cuse.o 00:08:21.207 CC lib/nvme/nvme_rdma.o 00:08:21.465 CC lib/accel/accel.o 00:08:21.465 CC lib/accel/accel_rpc.o 00:08:21.723 CC lib/accel/accel_sw.o 00:08:21.981 CC lib/blob/blobstore.o 00:08:21.981 CC lib/init/json_config.o 00:08:21.981 CC lib/virtio/virtio.o 00:08:21.981 CC lib/virtio/virtio_vhost_user.o 00:08:22.238 CC lib/init/subsystem.o 00:08:22.238 CC lib/blob/request.o 
00:08:22.497 CC lib/init/subsystem_rpc.o 00:08:22.497 CC lib/virtio/virtio_vfio_user.o 00:08:22.497 CC lib/blob/zeroes.o 00:08:22.497 CC lib/blob/blob_bs_dev.o 00:08:22.497 CC lib/virtio/virtio_pci.o 00:08:22.755 CC lib/init/rpc.o 00:08:22.756 CC lib/fsdev/fsdev.o 00:08:22.756 CC lib/fsdev/fsdev_rpc.o 00:08:22.756 CC lib/fsdev/fsdev_io.o 00:08:22.756 LIB libspdk_init.a 00:08:22.756 LIB libspdk_accel.a 00:08:23.015 SO libspdk_init.so.6.0 00:08:23.015 LIB libspdk_virtio.a 00:08:23.015 SO libspdk_accel.so.16.0 00:08:23.015 SO libspdk_virtio.so.7.0 00:08:23.015 SYMLINK libspdk_init.so 00:08:23.015 SYMLINK libspdk_virtio.so 00:08:23.015 SYMLINK libspdk_accel.so 00:08:23.273 CC lib/event/app.o 00:08:23.273 CC lib/event/reactor.o 00:08:23.273 CC lib/event/log_rpc.o 00:08:23.273 CC lib/event/app_rpc.o 00:08:23.273 CC lib/event/scheduler_static.o 00:08:23.273 LIB libspdk_nvme.a 00:08:23.273 CC lib/bdev/bdev.o 00:08:23.273 CC lib/bdev/bdev_rpc.o 00:08:23.273 CC lib/bdev/bdev_zone.o 00:08:23.273 CC lib/bdev/part.o 00:08:23.531 SO libspdk_nvme.so.15.0 00:08:23.531 CC lib/bdev/scsi_nvme.o 00:08:23.531 LIB libspdk_fsdev.a 00:08:23.788 SO libspdk_fsdev.so.2.0 00:08:23.788 SYMLINK libspdk_fsdev.so 00:08:23.788 LIB libspdk_event.a 00:08:23.788 SYMLINK libspdk_nvme.so 00:08:23.788 SO libspdk_event.so.14.0 00:08:24.045 SYMLINK libspdk_event.so 00:08:24.045 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:08:24.978 LIB libspdk_fuse_dispatcher.a 00:08:24.978 SO libspdk_fuse_dispatcher.so.1.0 00:08:25.237 SYMLINK libspdk_fuse_dispatcher.so 00:08:26.609 LIB libspdk_blob.a 00:08:26.609 SO libspdk_blob.so.12.0 00:08:26.609 SYMLINK libspdk_blob.so 00:08:26.868 LIB libspdk_bdev.a 00:08:26.868 CC lib/blobfs/blobfs.o 00:08:26.868 CC lib/lvol/lvol.o 00:08:26.868 CC lib/blobfs/tree.o 00:08:26.868 SO libspdk_bdev.so.17.0 00:08:27.126 SYMLINK libspdk_bdev.so 00:08:27.386 CC lib/nvmf/ctrlr.o 00:08:27.386 CC lib/nbd/nbd.o 00:08:27.386 CC lib/nbd/nbd_rpc.o 00:08:27.386 CC lib/nvmf/ctrlr_discovery.o 
00:08:27.386 CC lib/nvmf/ctrlr_bdev.o 00:08:27.386 CC lib/ftl/ftl_core.o 00:08:27.386 CC lib/scsi/dev.o 00:08:27.386 CC lib/ublk/ublk.o 00:08:27.644 CC lib/ftl/ftl_init.o 00:08:27.644 CC lib/scsi/lun.o 00:08:27.903 LIB libspdk_nbd.a 00:08:27.903 CC lib/scsi/port.o 00:08:27.903 SO libspdk_nbd.so.7.0 00:08:27.903 CC lib/ftl/ftl_layout.o 00:08:27.903 SYMLINK libspdk_nbd.so 00:08:27.903 CC lib/ftl/ftl_debug.o 00:08:28.161 LIB libspdk_blobfs.a 00:08:28.161 CC lib/scsi/scsi.o 00:08:28.161 SO libspdk_blobfs.so.11.0 00:08:28.161 CC lib/scsi/scsi_bdev.o 00:08:28.161 CC lib/scsi/scsi_pr.o 00:08:28.161 SYMLINK libspdk_blobfs.so 00:08:28.161 CC lib/ftl/ftl_io.o 00:08:28.419 CC lib/nvmf/subsystem.o 00:08:28.419 CC lib/nvmf/nvmf.o 00:08:28.419 LIB libspdk_lvol.a 00:08:28.419 SO libspdk_lvol.so.11.0 00:08:28.419 CC lib/scsi/scsi_rpc.o 00:08:28.419 CC lib/ublk/ublk_rpc.o 00:08:28.419 SYMLINK libspdk_lvol.so 00:08:28.419 CC lib/scsi/task.o 00:08:28.419 CC lib/nvmf/nvmf_rpc.o 00:08:28.419 CC lib/ftl/ftl_sb.o 00:08:28.691 CC lib/ftl/ftl_l2p.o 00:08:28.691 CC lib/nvmf/transport.o 00:08:28.691 LIB libspdk_ublk.a 00:08:28.691 SO libspdk_ublk.so.3.0 00:08:28.691 CC lib/nvmf/tcp.o 00:08:28.691 SYMLINK libspdk_ublk.so 00:08:28.691 CC lib/nvmf/stubs.o 00:08:28.691 CC lib/ftl/ftl_l2p_flat.o 00:08:28.949 CC lib/nvmf/mdns_server.o 00:08:28.949 LIB libspdk_scsi.a 00:08:28.949 SO libspdk_scsi.so.9.0 00:08:28.949 SYMLINK libspdk_scsi.so 00:08:28.949 CC lib/nvmf/rdma.o 00:08:28.949 CC lib/ftl/ftl_nv_cache.o 00:08:29.516 CC lib/nvmf/auth.o 00:08:29.516 CC lib/ftl/ftl_band.o 00:08:29.774 CC lib/iscsi/conn.o 00:08:29.774 CC lib/vhost/vhost.o 00:08:29.774 CC lib/ftl/ftl_band_ops.o 00:08:30.034 CC lib/ftl/ftl_writer.o 00:08:30.034 CC lib/ftl/ftl_rq.o 00:08:30.034 CC lib/ftl/ftl_reloc.o 00:08:30.292 CC lib/ftl/ftl_l2p_cache.o 00:08:30.292 CC lib/ftl/ftl_p2l.o 00:08:30.292 CC lib/ftl/ftl_p2l_log.o 00:08:30.292 CC lib/ftl/mngt/ftl_mngt.o 00:08:30.551 CC lib/iscsi/init_grp.o 00:08:30.551 CC 
lib/vhost/vhost_rpc.o 00:08:30.551 CC lib/iscsi/iscsi.o 00:08:30.810 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:08:30.810 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:08:30.810 CC lib/vhost/vhost_scsi.o 00:08:30.810 CC lib/iscsi/param.o 00:08:30.810 CC lib/iscsi/portal_grp.o 00:08:30.810 CC lib/ftl/mngt/ftl_mngt_startup.o 00:08:30.810 CC lib/ftl/mngt/ftl_mngt_md.o 00:08:31.069 CC lib/ftl/mngt/ftl_mngt_misc.o 00:08:31.069 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:08:31.069 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:08:31.069 CC lib/ftl/mngt/ftl_mngt_band.o 00:08:31.329 CC lib/iscsi/tgt_node.o 00:08:31.329 CC lib/iscsi/iscsi_subsystem.o 00:08:31.329 CC lib/iscsi/iscsi_rpc.o 00:08:31.329 CC lib/iscsi/task.o 00:08:31.329 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:08:31.329 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:08:31.587 CC lib/vhost/vhost_blk.o 00:08:31.587 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:08:31.587 CC lib/vhost/rte_vhost_user.o 00:08:31.587 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:08:31.846 CC lib/ftl/utils/ftl_conf.o 00:08:31.846 CC lib/ftl/utils/ftl_md.o 00:08:31.846 CC lib/ftl/utils/ftl_mempool.o 00:08:31.846 CC lib/ftl/utils/ftl_bitmap.o 00:08:31.846 CC lib/ftl/utils/ftl_property.o 00:08:32.105 LIB libspdk_nvmf.a 00:08:32.105 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:08:32.105 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:08:32.105 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:08:32.105 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:08:32.105 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:08:32.365 SO libspdk_nvmf.so.20.0 00:08:32.365 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:08:32.365 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:08:32.365 CC lib/ftl/upgrade/ftl_sb_v3.o 00:08:32.365 CC lib/ftl/upgrade/ftl_sb_v5.o 00:08:32.365 CC lib/ftl/nvc/ftl_nvc_dev.o 00:08:32.365 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:08:32.623 SYMLINK libspdk_nvmf.so 00:08:32.623 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:08:32.623 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:08:32.623 CC lib/ftl/base/ftl_base_dev.o 00:08:32.623 CC 
lib/ftl/base/ftl_base_bdev.o 00:08:32.623 CC lib/ftl/ftl_trace.o 00:08:32.623 LIB libspdk_iscsi.a 00:08:32.623 SO libspdk_iscsi.so.8.0 00:08:32.882 LIB libspdk_ftl.a 00:08:32.882 SYMLINK libspdk_iscsi.so 00:08:33.141 LIB libspdk_vhost.a 00:08:33.141 SO libspdk_vhost.so.8.0 00:08:33.141 SO libspdk_ftl.so.9.0 00:08:33.141 SYMLINK libspdk_vhost.so 00:08:33.400 SYMLINK libspdk_ftl.so 00:08:33.967 CC module/env_dpdk/env_dpdk_rpc.o 00:08:33.967 CC module/keyring/file/keyring.o 00:08:33.967 CC module/accel/error/accel_error.o 00:08:33.967 CC module/fsdev/aio/fsdev_aio.o 00:08:33.967 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:08:33.967 CC module/scheduler/dynamic/scheduler_dynamic.o 00:08:33.967 CC module/scheduler/gscheduler/gscheduler.o 00:08:33.967 CC module/sock/posix/posix.o 00:08:33.967 CC module/keyring/linux/keyring.o 00:08:33.967 CC module/blob/bdev/blob_bdev.o 00:08:33.967 LIB libspdk_env_dpdk_rpc.a 00:08:33.967 SO libspdk_env_dpdk_rpc.so.6.0 00:08:34.226 SYMLINK libspdk_env_dpdk_rpc.so 00:08:34.226 CC module/keyring/linux/keyring_rpc.o 00:08:34.226 CC module/keyring/file/keyring_rpc.o 00:08:34.226 LIB libspdk_scheduler_gscheduler.a 00:08:34.226 LIB libspdk_scheduler_dpdk_governor.a 00:08:34.226 CC module/accel/error/accel_error_rpc.o 00:08:34.226 SO libspdk_scheduler_gscheduler.so.4.0 00:08:34.226 LIB libspdk_scheduler_dynamic.a 00:08:34.226 SO libspdk_scheduler_dpdk_governor.so.4.0 00:08:34.226 SO libspdk_scheduler_dynamic.so.4.0 00:08:34.226 LIB libspdk_keyring_linux.a 00:08:34.226 SYMLINK libspdk_scheduler_gscheduler.so 00:08:34.226 SYMLINK libspdk_scheduler_dpdk_governor.so 00:08:34.226 SYMLINK libspdk_scheduler_dynamic.so 00:08:34.226 CC module/fsdev/aio/fsdev_aio_rpc.o 00:08:34.226 SO libspdk_keyring_linux.so.1.0 00:08:34.485 LIB libspdk_keyring_file.a 00:08:34.485 LIB libspdk_blob_bdev.a 00:08:34.485 CC module/accel/ioat/accel_ioat.o 00:08:34.485 SO libspdk_keyring_file.so.2.0 00:08:34.485 LIB libspdk_accel_error.a 00:08:34.485 SO 
libspdk_blob_bdev.so.12.0 00:08:34.485 SYMLINK libspdk_keyring_linux.so 00:08:34.485 CC module/accel/ioat/accel_ioat_rpc.o 00:08:34.485 SO libspdk_accel_error.so.2.0 00:08:34.485 SYMLINK libspdk_keyring_file.so 00:08:34.485 SYMLINK libspdk_blob_bdev.so 00:08:34.485 CC module/fsdev/aio/linux_aio_mgr.o 00:08:34.485 CC module/accel/dsa/accel_dsa.o 00:08:34.485 CC module/accel/iaa/accel_iaa.o 00:08:34.485 SYMLINK libspdk_accel_error.so 00:08:34.485 CC module/accel/dsa/accel_dsa_rpc.o 00:08:34.485 CC module/accel/iaa/accel_iaa_rpc.o 00:08:34.744 LIB libspdk_accel_ioat.a 00:08:34.744 SO libspdk_accel_ioat.so.6.0 00:08:34.744 SYMLINK libspdk_accel_ioat.so 00:08:34.744 LIB libspdk_accel_iaa.a 00:08:34.744 CC module/bdev/delay/vbdev_delay.o 00:08:34.744 CC module/blobfs/bdev/blobfs_bdev.o 00:08:34.744 SO libspdk_accel_iaa.so.3.0 00:08:34.744 LIB libspdk_accel_dsa.a 00:08:35.001 CC module/bdev/error/vbdev_error.o 00:08:35.001 SO libspdk_accel_dsa.so.5.0 00:08:35.001 SYMLINK libspdk_accel_iaa.so 00:08:35.001 CC module/bdev/error/vbdev_error_rpc.o 00:08:35.001 CC module/bdev/lvol/vbdev_lvol.o 00:08:35.001 CC module/bdev/gpt/gpt.o 00:08:35.001 LIB libspdk_fsdev_aio.a 00:08:35.001 CC module/bdev/malloc/bdev_malloc.o 00:08:35.001 LIB libspdk_sock_posix.a 00:08:35.001 SYMLINK libspdk_accel_dsa.so 00:08:35.001 CC module/bdev/gpt/vbdev_gpt.o 00:08:35.001 SO libspdk_fsdev_aio.so.1.0 00:08:35.001 SO libspdk_sock_posix.so.6.0 00:08:35.001 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:08:35.001 SYMLINK libspdk_fsdev_aio.so 00:08:35.001 CC module/bdev/malloc/bdev_malloc_rpc.o 00:08:35.258 SYMLINK libspdk_sock_posix.so 00:08:35.258 CC module/bdev/delay/vbdev_delay_rpc.o 00:08:35.258 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:08:35.258 LIB libspdk_bdev_error.a 00:08:35.258 SO libspdk_bdev_error.so.6.0 00:08:35.258 LIB libspdk_blobfs_bdev.a 00:08:35.258 LIB libspdk_bdev_gpt.a 00:08:35.258 SO libspdk_blobfs_bdev.so.6.0 00:08:35.258 SO libspdk_bdev_gpt.so.6.0 00:08:35.258 SYMLINK 
libspdk_bdev_error.so 00:08:35.258 LIB libspdk_bdev_delay.a 00:08:35.516 CC module/bdev/null/bdev_null.o 00:08:35.516 SO libspdk_bdev_delay.so.6.0 00:08:35.516 SYMLINK libspdk_blobfs_bdev.so 00:08:35.516 CC module/bdev/null/bdev_null_rpc.o 00:08:35.516 SYMLINK libspdk_bdev_gpt.so 00:08:35.516 CC module/bdev/nvme/bdev_nvme.o 00:08:35.516 CC module/bdev/passthru/vbdev_passthru.o 00:08:35.516 LIB libspdk_bdev_malloc.a 00:08:35.516 SYMLINK libspdk_bdev_delay.so 00:08:35.516 CC module/bdev/raid/bdev_raid.o 00:08:35.516 SO libspdk_bdev_malloc.so.6.0 00:08:35.516 CC module/bdev/split/vbdev_split.o 00:08:35.516 CC module/bdev/raid/bdev_raid_rpc.o 00:08:35.516 CC module/bdev/split/vbdev_split_rpc.o 00:08:35.773 SYMLINK libspdk_bdev_malloc.so 00:08:35.773 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:08:35.773 LIB libspdk_bdev_lvol.a 00:08:35.773 CC module/bdev/zone_block/vbdev_zone_block.o 00:08:35.773 SO libspdk_bdev_lvol.so.6.0 00:08:35.773 LIB libspdk_bdev_null.a 00:08:35.773 SO libspdk_bdev_null.so.6.0 00:08:35.773 SYMLINK libspdk_bdev_lvol.so 00:08:35.774 CC module/bdev/raid/bdev_raid_sb.o 00:08:35.774 CC module/bdev/nvme/bdev_nvme_rpc.o 00:08:35.774 CC module/bdev/raid/raid0.o 00:08:35.774 SYMLINK libspdk_bdev_null.so 00:08:35.774 CC module/bdev/nvme/nvme_rpc.o 00:08:35.774 CC module/bdev/raid/raid1.o 00:08:36.031 LIB libspdk_bdev_split.a 00:08:36.031 LIB libspdk_bdev_passthru.a 00:08:36.031 SO libspdk_bdev_split.so.6.0 00:08:36.031 SO libspdk_bdev_passthru.so.6.0 00:08:36.031 SYMLINK libspdk_bdev_passthru.so 00:08:36.031 SYMLINK libspdk_bdev_split.so 00:08:36.031 CC module/bdev/nvme/bdev_mdns_client.o 00:08:36.031 CC module/bdev/nvme/vbdev_opal.o 00:08:36.031 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:08:36.356 CC module/bdev/raid/concat.o 00:08:36.356 CC module/bdev/raid/raid5f.o 00:08:36.356 CC module/bdev/nvme/vbdev_opal_rpc.o 00:08:36.356 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:08:36.356 LIB libspdk_bdev_zone_block.a 00:08:36.356 SO 
libspdk_bdev_zone_block.so.6.0 00:08:36.356 CC module/bdev/aio/bdev_aio.o 00:08:36.356 SYMLINK libspdk_bdev_zone_block.so 00:08:36.356 CC module/bdev/aio/bdev_aio_rpc.o 00:08:36.614 CC module/bdev/ftl/bdev_ftl.o 00:08:36.614 CC module/bdev/ftl/bdev_ftl_rpc.o 00:08:36.614 CC module/bdev/iscsi/bdev_iscsi.o 00:08:36.614 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:08:36.614 CC module/bdev/virtio/bdev_virtio_scsi.o 00:08:36.614 CC module/bdev/virtio/bdev_virtio_blk.o 00:08:36.871 LIB libspdk_bdev_aio.a 00:08:36.871 CC module/bdev/virtio/bdev_virtio_rpc.o 00:08:36.871 SO libspdk_bdev_aio.so.6.0 00:08:36.871 SYMLINK libspdk_bdev_aio.so 00:08:36.871 LIB libspdk_bdev_ftl.a 00:08:36.871 LIB libspdk_bdev_raid.a 00:08:37.129 SO libspdk_bdev_ftl.so.6.0 00:08:37.129 SO libspdk_bdev_raid.so.6.0 00:08:37.129 SYMLINK libspdk_bdev_ftl.so 00:08:37.129 LIB libspdk_bdev_iscsi.a 00:08:37.129 SO libspdk_bdev_iscsi.so.6.0 00:08:37.129 SYMLINK libspdk_bdev_raid.so 00:08:37.129 SYMLINK libspdk_bdev_iscsi.so 00:08:37.386 LIB libspdk_bdev_virtio.a 00:08:37.386 SO libspdk_bdev_virtio.so.6.0 00:08:37.679 SYMLINK libspdk_bdev_virtio.so 00:08:39.628 LIB libspdk_bdev_nvme.a 00:08:39.628 SO libspdk_bdev_nvme.so.7.1 00:08:39.628 SYMLINK libspdk_bdev_nvme.so 00:08:40.196 CC module/event/subsystems/keyring/keyring.o 00:08:40.196 CC module/event/subsystems/sock/sock.o 00:08:40.196 CC module/event/subsystems/scheduler/scheduler.o 00:08:40.196 CC module/event/subsystems/iobuf/iobuf.o 00:08:40.196 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:08:40.196 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:08:40.196 CC module/event/subsystems/fsdev/fsdev.o 00:08:40.196 CC module/event/subsystems/vmd/vmd.o 00:08:40.196 CC module/event/subsystems/vmd/vmd_rpc.o 00:08:40.196 LIB libspdk_event_keyring.a 00:08:40.455 LIB libspdk_event_sock.a 00:08:40.455 LIB libspdk_event_vhost_blk.a 00:08:40.455 LIB libspdk_event_fsdev.a 00:08:40.455 SO libspdk_event_keyring.so.1.0 00:08:40.455 SO libspdk_event_sock.so.5.0 
00:08:40.455 LIB libspdk_event_scheduler.a 00:08:40.455 SO libspdk_event_vhost_blk.so.3.0 00:08:40.455 LIB libspdk_event_iobuf.a 00:08:40.455 SO libspdk_event_fsdev.so.1.0 00:08:40.455 LIB libspdk_event_vmd.a 00:08:40.455 SO libspdk_event_scheduler.so.4.0 00:08:40.455 SYMLINK libspdk_event_keyring.so 00:08:40.455 SO libspdk_event_iobuf.so.3.0 00:08:40.455 SO libspdk_event_vmd.so.6.0 00:08:40.455 SYMLINK libspdk_event_vhost_blk.so 00:08:40.455 SYMLINK libspdk_event_sock.so 00:08:40.455 SYMLINK libspdk_event_fsdev.so 00:08:40.455 SYMLINK libspdk_event_iobuf.so 00:08:40.455 SYMLINK libspdk_event_scheduler.so 00:08:40.455 SYMLINK libspdk_event_vmd.so 00:08:40.714 CC module/event/subsystems/accel/accel.o 00:08:40.972 LIB libspdk_event_accel.a 00:08:40.972 SO libspdk_event_accel.so.6.0 00:08:41.231 SYMLINK libspdk_event_accel.so 00:08:41.490 CC module/event/subsystems/bdev/bdev.o 00:08:41.749 LIB libspdk_event_bdev.a 00:08:41.749 SO libspdk_event_bdev.so.6.0 00:08:41.749 SYMLINK libspdk_event_bdev.so 00:08:42.008 CC module/event/subsystems/nbd/nbd.o 00:08:42.008 CC module/event/subsystems/scsi/scsi.o 00:08:42.008 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:08:42.008 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:08:42.008 CC module/event/subsystems/ublk/ublk.o 00:08:42.273 LIB libspdk_event_ublk.a 00:08:42.273 LIB libspdk_event_nbd.a 00:08:42.273 LIB libspdk_event_scsi.a 00:08:42.273 SO libspdk_event_ublk.so.3.0 00:08:42.273 SO libspdk_event_nbd.so.6.0 00:08:42.273 SO libspdk_event_scsi.so.6.0 00:08:42.273 LIB libspdk_event_nvmf.a 00:08:42.273 SYMLINK libspdk_event_nbd.so 00:08:42.273 SYMLINK libspdk_event_ublk.so 00:08:42.273 SYMLINK libspdk_event_scsi.so 00:08:42.273 SO libspdk_event_nvmf.so.6.0 00:08:42.552 SYMLINK libspdk_event_nvmf.so 00:08:42.552 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:08:42.552 CC module/event/subsystems/iscsi/iscsi.o 00:08:42.810 LIB libspdk_event_vhost_scsi.a 00:08:42.810 SO libspdk_event_vhost_scsi.so.3.0 00:08:42.810 LIB 
libspdk_event_iscsi.a 00:08:42.810 SO libspdk_event_iscsi.so.6.0 00:08:42.810 SYMLINK libspdk_event_vhost_scsi.so 00:08:42.810 SYMLINK libspdk_event_iscsi.so 00:08:43.068 SO libspdk.so.6.0 00:08:43.068 SYMLINK libspdk.so 00:08:43.327 CC app/spdk_lspci/spdk_lspci.o 00:08:43.327 CXX app/trace/trace.o 00:08:43.327 CC app/trace_record/trace_record.o 00:08:43.327 CC app/nvmf_tgt/nvmf_main.o 00:08:43.327 CC app/iscsi_tgt/iscsi_tgt.o 00:08:43.327 CC examples/interrupt_tgt/interrupt_tgt.o 00:08:43.327 CC app/spdk_tgt/spdk_tgt.o 00:08:43.327 CC examples/util/zipf/zipf.o 00:08:43.327 CC test/thread/poller_perf/poller_perf.o 00:08:43.586 CC examples/ioat/perf/perf.o 00:08:43.586 LINK spdk_lspci 00:08:43.586 LINK nvmf_tgt 00:08:43.586 LINK interrupt_tgt 00:08:43.586 LINK poller_perf 00:08:43.586 LINK iscsi_tgt 00:08:43.586 LINK zipf 00:08:43.586 LINK spdk_tgt 00:08:43.845 LINK spdk_trace_record 00:08:43.845 LINK ioat_perf 00:08:43.845 CC examples/ioat/verify/verify.o 00:08:43.845 LINK spdk_trace 00:08:43.845 CC app/spdk_nvme_perf/perf.o 00:08:44.104 CC app/spdk_nvme_identify/identify.o 00:08:44.104 CC app/spdk_nvme_discover/discovery_aer.o 00:08:44.104 CC app/spdk_top/spdk_top.o 00:08:44.104 TEST_HEADER include/spdk/accel.h 00:08:44.104 TEST_HEADER include/spdk/accel_module.h 00:08:44.104 TEST_HEADER include/spdk/assert.h 00:08:44.104 TEST_HEADER include/spdk/barrier.h 00:08:44.104 TEST_HEADER include/spdk/base64.h 00:08:44.104 TEST_HEADER include/spdk/bdev.h 00:08:44.104 TEST_HEADER include/spdk/bdev_module.h 00:08:44.104 TEST_HEADER include/spdk/bdev_zone.h 00:08:44.104 TEST_HEADER include/spdk/bit_array.h 00:08:44.104 TEST_HEADER include/spdk/bit_pool.h 00:08:44.104 TEST_HEADER include/spdk/blob_bdev.h 00:08:44.104 TEST_HEADER include/spdk/blobfs_bdev.h 00:08:44.104 TEST_HEADER include/spdk/blobfs.h 00:08:44.104 TEST_HEADER include/spdk/blob.h 00:08:44.104 TEST_HEADER include/spdk/conf.h 00:08:44.104 TEST_HEADER include/spdk/config.h 00:08:44.104 CC 
test/dma/test_dma/test_dma.o 00:08:44.104 TEST_HEADER include/spdk/cpuset.h 00:08:44.104 CC app/spdk_dd/spdk_dd.o 00:08:44.104 TEST_HEADER include/spdk/crc16.h 00:08:44.104 TEST_HEADER include/spdk/crc32.h 00:08:44.104 TEST_HEADER include/spdk/crc64.h 00:08:44.104 TEST_HEADER include/spdk/dif.h 00:08:44.104 TEST_HEADER include/spdk/dma.h 00:08:44.104 TEST_HEADER include/spdk/endian.h 00:08:44.104 TEST_HEADER include/spdk/env_dpdk.h 00:08:44.104 TEST_HEADER include/spdk/env.h 00:08:44.104 TEST_HEADER include/spdk/event.h 00:08:44.104 TEST_HEADER include/spdk/fd_group.h 00:08:44.104 TEST_HEADER include/spdk/fd.h 00:08:44.104 CC test/app/bdev_svc/bdev_svc.o 00:08:44.104 TEST_HEADER include/spdk/file.h 00:08:44.104 LINK verify 00:08:44.104 TEST_HEADER include/spdk/fsdev.h 00:08:44.104 TEST_HEADER include/spdk/fsdev_module.h 00:08:44.104 TEST_HEADER include/spdk/ftl.h 00:08:44.104 TEST_HEADER include/spdk/fuse_dispatcher.h 00:08:44.104 TEST_HEADER include/spdk/gpt_spec.h 00:08:44.104 TEST_HEADER include/spdk/hexlify.h 00:08:44.104 TEST_HEADER include/spdk/histogram_data.h 00:08:44.104 TEST_HEADER include/spdk/idxd.h 00:08:44.104 TEST_HEADER include/spdk/idxd_spec.h 00:08:44.104 TEST_HEADER include/spdk/init.h 00:08:44.104 TEST_HEADER include/spdk/ioat.h 00:08:44.104 TEST_HEADER include/spdk/ioat_spec.h 00:08:44.104 TEST_HEADER include/spdk/iscsi_spec.h 00:08:44.104 TEST_HEADER include/spdk/json.h 00:08:44.104 TEST_HEADER include/spdk/jsonrpc.h 00:08:44.104 TEST_HEADER include/spdk/keyring.h 00:08:44.104 TEST_HEADER include/spdk/keyring_module.h 00:08:44.104 TEST_HEADER include/spdk/likely.h 00:08:44.104 TEST_HEADER include/spdk/log.h 00:08:44.104 TEST_HEADER include/spdk/lvol.h 00:08:44.104 TEST_HEADER include/spdk/md5.h 00:08:44.104 TEST_HEADER include/spdk/memory.h 00:08:44.104 TEST_HEADER include/spdk/mmio.h 00:08:44.104 TEST_HEADER include/spdk/nbd.h 00:08:44.104 TEST_HEADER include/spdk/net.h 00:08:44.104 TEST_HEADER include/spdk/notify.h 00:08:44.104 TEST_HEADER 
include/spdk/nvme.h 00:08:44.363 TEST_HEADER include/spdk/nvme_intel.h 00:08:44.363 TEST_HEADER include/spdk/nvme_ocssd.h 00:08:44.363 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:08:44.363 TEST_HEADER include/spdk/nvme_spec.h 00:08:44.363 TEST_HEADER include/spdk/nvme_zns.h 00:08:44.363 TEST_HEADER include/spdk/nvmf_cmd.h 00:08:44.363 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:08:44.363 TEST_HEADER include/spdk/nvmf.h 00:08:44.363 TEST_HEADER include/spdk/nvmf_spec.h 00:08:44.363 TEST_HEADER include/spdk/nvmf_transport.h 00:08:44.363 TEST_HEADER include/spdk/opal.h 00:08:44.363 TEST_HEADER include/spdk/opal_spec.h 00:08:44.363 TEST_HEADER include/spdk/pci_ids.h 00:08:44.363 TEST_HEADER include/spdk/pipe.h 00:08:44.363 TEST_HEADER include/spdk/queue.h 00:08:44.363 TEST_HEADER include/spdk/reduce.h 00:08:44.363 TEST_HEADER include/spdk/rpc.h 00:08:44.363 TEST_HEADER include/spdk/scheduler.h 00:08:44.363 TEST_HEADER include/spdk/scsi.h 00:08:44.363 TEST_HEADER include/spdk/scsi_spec.h 00:08:44.363 TEST_HEADER include/spdk/sock.h 00:08:44.363 TEST_HEADER include/spdk/stdinc.h 00:08:44.363 LINK spdk_nvme_discover 00:08:44.363 TEST_HEADER include/spdk/string.h 00:08:44.363 TEST_HEADER include/spdk/thread.h 00:08:44.363 TEST_HEADER include/spdk/trace.h 00:08:44.363 TEST_HEADER include/spdk/trace_parser.h 00:08:44.363 TEST_HEADER include/spdk/tree.h 00:08:44.363 TEST_HEADER include/spdk/ublk.h 00:08:44.363 TEST_HEADER include/spdk/util.h 00:08:44.363 TEST_HEADER include/spdk/uuid.h 00:08:44.363 TEST_HEADER include/spdk/version.h 00:08:44.363 TEST_HEADER include/spdk/vfio_user_pci.h 00:08:44.363 TEST_HEADER include/spdk/vfio_user_spec.h 00:08:44.363 TEST_HEADER include/spdk/vhost.h 00:08:44.363 TEST_HEADER include/spdk/vmd.h 00:08:44.363 TEST_HEADER include/spdk/xor.h 00:08:44.363 TEST_HEADER include/spdk/zipf.h 00:08:44.363 CXX test/cpp_headers/accel.o 00:08:44.363 LINK bdev_svc 00:08:44.363 CC examples/thread/thread/thread_ex.o 00:08:44.622 CXX 
test/cpp_headers/accel_module.o 00:08:44.622 LINK spdk_dd 00:08:44.622 CC test/env/mem_callbacks/mem_callbacks.o 00:08:44.622 CC examples/sock/hello_world/hello_sock.o 00:08:44.880 LINK thread 00:08:44.880 LINK test_dma 00:08:44.880 CXX test/cpp_headers/assert.o 00:08:44.880 CXX test/cpp_headers/barrier.o 00:08:44.880 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:08:45.139 CXX test/cpp_headers/base64.o 00:08:45.139 LINK hello_sock 00:08:45.139 LINK spdk_nvme_perf 00:08:45.139 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:08:45.139 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:08:45.139 LINK spdk_nvme_identify 00:08:45.139 CC examples/vmd/lsvmd/lsvmd.o 00:08:45.396 LINK spdk_top 00:08:45.396 CXX test/cpp_headers/bdev.o 00:08:45.396 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:08:45.396 LINK mem_callbacks 00:08:45.396 LINK nvme_fuzz 00:08:45.397 LINK lsvmd 00:08:45.397 CC test/event/event_perf/event_perf.o 00:08:45.397 CXX test/cpp_headers/bdev_module.o 00:08:45.654 CC test/nvme/aer/aer.o 00:08:45.654 CC test/env/vtophys/vtophys.o 00:08:45.654 CC test/rpc_client/rpc_client_test.o 00:08:45.654 LINK event_perf 00:08:45.654 LINK vtophys 00:08:45.654 CC examples/vmd/led/led.o 00:08:45.654 CC app/fio/nvme/fio_plugin.o 00:08:45.654 CXX test/cpp_headers/bdev_zone.o 00:08:45.911 LINK rpc_client_test 00:08:45.911 LINK vhost_fuzz 00:08:45.911 CC test/accel/dif/dif.o 00:08:45.911 LINK aer 00:08:45.911 LINK led 00:08:45.911 CC test/event/reactor/reactor.o 00:08:45.911 CXX test/cpp_headers/bit_array.o 00:08:46.169 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:08:46.169 CC test/event/reactor_perf/reactor_perf.o 00:08:46.169 LINK reactor 00:08:46.169 CC test/app/histogram_perf/histogram_perf.o 00:08:46.169 CXX test/cpp_headers/bit_pool.o 00:08:46.169 CC test/nvme/reset/reset.o 00:08:46.169 LINK env_dpdk_post_init 00:08:46.447 LINK reactor_perf 00:08:46.447 CC examples/idxd/perf/perf.o 00:08:46.447 LINK histogram_perf 00:08:46.447 CXX test/cpp_headers/blob_bdev.o 00:08:46.447 CC 
test/nvme/sgl/sgl.o 00:08:46.447 LINK spdk_nvme 00:08:46.447 LINK reset 00:08:46.771 CC test/env/memory/memory_ut.o 00:08:46.771 CC test/event/app_repeat/app_repeat.o 00:08:46.771 CXX test/cpp_headers/blobfs_bdev.o 00:08:46.771 CC app/fio/bdev/fio_plugin.o 00:08:46.771 LINK idxd_perf 00:08:46.771 LINK dif 00:08:46.771 LINK sgl 00:08:46.771 CC test/blobfs/mkfs/mkfs.o 00:08:46.771 LINK app_repeat 00:08:46.771 CC test/env/pci/pci_ut.o 00:08:46.771 CXX test/cpp_headers/blobfs.o 00:08:47.030 CXX test/cpp_headers/blob.o 00:08:47.030 LINK mkfs 00:08:47.287 CC test/nvme/e2edp/nvme_dp.o 00:08:47.287 CC test/event/scheduler/scheduler.o 00:08:47.287 CC examples/fsdev/hello_world/hello_fsdev.o 00:08:47.287 CXX test/cpp_headers/conf.o 00:08:47.287 CC test/lvol/esnap/esnap.o 00:08:47.544 LINK pci_ut 00:08:47.544 LINK spdk_bdev 00:08:47.544 LINK iscsi_fuzz 00:08:47.544 LINK nvme_dp 00:08:47.544 CXX test/cpp_headers/config.o 00:08:47.544 LINK scheduler 00:08:47.544 CXX test/cpp_headers/cpuset.o 00:08:47.544 LINK hello_fsdev 00:08:47.802 CC test/bdev/bdevio/bdevio.o 00:08:47.802 CC app/vhost/vhost.o 00:08:47.802 CXX test/cpp_headers/crc16.o 00:08:47.802 CC test/nvme/overhead/overhead.o 00:08:47.802 CC test/app/jsoncat/jsoncat.o 00:08:47.802 CC test/app/stub/stub.o 00:08:48.062 CC examples/accel/perf/accel_perf.o 00:08:48.062 CXX test/cpp_headers/crc32.o 00:08:48.062 LINK vhost 00:08:48.062 LINK jsoncat 00:08:48.062 CC examples/blob/hello_world/hello_blob.o 00:08:48.062 LINK memory_ut 00:08:48.062 LINK stub 00:08:48.062 CXX test/cpp_headers/crc64.o 00:08:48.320 LINK bdevio 00:08:48.320 LINK overhead 00:08:48.320 CC test/nvme/err_injection/err_injection.o 00:08:48.320 CC test/nvme/startup/startup.o 00:08:48.320 LINK hello_blob 00:08:48.320 CXX test/cpp_headers/dif.o 00:08:48.320 CXX test/cpp_headers/dma.o 00:08:48.320 CC test/nvme/reserve/reserve.o 00:08:48.320 CC test/nvme/simple_copy/simple_copy.o 00:08:48.577 LINK err_injection 00:08:48.577 LINK startup 00:08:48.577 CXX 
test/cpp_headers/endian.o 00:08:48.577 LINK accel_perf 00:08:48.577 CC examples/nvme/hello_world/hello_world.o 00:08:48.577 CC examples/nvme/reconnect/reconnect.o 00:08:48.577 LINK reserve 00:08:48.833 LINK simple_copy 00:08:48.833 CXX test/cpp_headers/env_dpdk.o 00:08:48.833 CC examples/blob/cli/blobcli.o 00:08:48.833 CC test/nvme/connect_stress/connect_stress.o 00:08:48.833 CC examples/nvme/nvme_manage/nvme_manage.o 00:08:48.833 CC examples/nvme/arbitration/arbitration.o 00:08:48.833 LINK hello_world 00:08:48.833 CXX test/cpp_headers/env.o 00:08:49.092 CC examples/nvme/hotplug/hotplug.o 00:08:49.092 CC test/nvme/boot_partition/boot_partition.o 00:08:49.092 LINK connect_stress 00:08:49.092 CXX test/cpp_headers/event.o 00:08:49.092 LINK reconnect 00:08:49.350 CC test/nvme/compliance/nvme_compliance.o 00:08:49.350 CXX test/cpp_headers/fd_group.o 00:08:49.350 LINK boot_partition 00:08:49.350 LINK hotplug 00:08:49.350 CXX test/cpp_headers/fd.o 00:08:49.350 LINK arbitration 00:08:49.350 LINK blobcli 00:08:49.350 CXX test/cpp_headers/file.o 00:08:49.608 CC examples/nvme/cmb_copy/cmb_copy.o 00:08:49.608 CXX test/cpp_headers/fsdev.o 00:08:49.608 CC test/nvme/fused_ordering/fused_ordering.o 00:08:49.608 LINK nvme_manage 00:08:49.608 CC examples/bdev/hello_world/hello_bdev.o 00:08:49.608 CC examples/bdev/bdevperf/bdevperf.o 00:08:49.608 LINK nvme_compliance 00:08:49.608 CXX test/cpp_headers/fsdev_module.o 00:08:49.608 LINK cmb_copy 00:08:49.608 CC test/nvme/doorbell_aers/doorbell_aers.o 00:08:49.867 CC test/nvme/fdp/fdp.o 00:08:49.867 LINK fused_ordering 00:08:49.867 CC examples/nvme/abort/abort.o 00:08:49.867 CC test/nvme/cuse/cuse.o 00:08:49.867 LINK hello_bdev 00:08:49.867 CXX test/cpp_headers/ftl.o 00:08:49.867 LINK doorbell_aers 00:08:49.867 CXX test/cpp_headers/fuse_dispatcher.o 00:08:50.126 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:08:50.126 CXX test/cpp_headers/gpt_spec.o 00:08:50.126 CXX test/cpp_headers/hexlify.o 00:08:50.126 CXX 
test/cpp_headers/histogram_data.o 00:08:50.126 CXX test/cpp_headers/idxd.o 00:08:50.126 LINK fdp 00:08:50.126 LINK pmr_persistence 00:08:50.386 CXX test/cpp_headers/idxd_spec.o 00:08:50.386 LINK abort 00:08:50.386 CXX test/cpp_headers/init.o 00:08:50.386 CXX test/cpp_headers/ioat.o 00:08:50.386 CXX test/cpp_headers/ioat_spec.o 00:08:50.386 CXX test/cpp_headers/iscsi_spec.o 00:08:50.386 CXX test/cpp_headers/json.o 00:08:50.386 CXX test/cpp_headers/jsonrpc.o 00:08:50.386 CXX test/cpp_headers/keyring.o 00:08:50.648 CXX test/cpp_headers/keyring_module.o 00:08:50.648 CXX test/cpp_headers/likely.o 00:08:50.648 CXX test/cpp_headers/log.o 00:08:50.648 CXX test/cpp_headers/lvol.o 00:08:50.648 CXX test/cpp_headers/md5.o 00:08:50.648 CXX test/cpp_headers/memory.o 00:08:50.648 CXX test/cpp_headers/mmio.o 00:08:50.648 CXX test/cpp_headers/nbd.o 00:08:50.648 LINK bdevperf 00:08:50.648 CXX test/cpp_headers/net.o 00:08:50.648 CXX test/cpp_headers/notify.o 00:08:50.648 CXX test/cpp_headers/nvme.o 00:08:50.920 CXX test/cpp_headers/nvme_intel.o 00:08:50.920 CXX test/cpp_headers/nvme_ocssd.o 00:08:50.920 CXX test/cpp_headers/nvme_ocssd_spec.o 00:08:50.920 CXX test/cpp_headers/nvme_spec.o 00:08:50.920 CXX test/cpp_headers/nvme_zns.o 00:08:50.920 CXX test/cpp_headers/nvmf_cmd.o 00:08:50.920 CXX test/cpp_headers/nvmf_fc_spec.o 00:08:50.920 CXX test/cpp_headers/nvmf.o 00:08:51.179 CXX test/cpp_headers/nvmf_spec.o 00:08:51.179 CXX test/cpp_headers/nvmf_transport.o 00:08:51.179 CXX test/cpp_headers/opal_spec.o 00:08:51.179 CXX test/cpp_headers/opal.o 00:08:51.179 CXX test/cpp_headers/pci_ids.o 00:08:51.179 CXX test/cpp_headers/pipe.o 00:08:51.179 CXX test/cpp_headers/queue.o 00:08:51.179 CC examples/nvmf/nvmf/nvmf.o 00:08:51.179 CXX test/cpp_headers/reduce.o 00:08:51.488 CXX test/cpp_headers/rpc.o 00:08:51.488 CXX test/cpp_headers/scheduler.o 00:08:51.488 CXX test/cpp_headers/scsi.o 00:08:51.488 CXX test/cpp_headers/scsi_spec.o 00:08:51.488 CXX test/cpp_headers/sock.o 00:08:51.488 CXX 
test/cpp_headers/stdinc.o 00:08:51.488 CXX test/cpp_headers/string.o 00:08:51.488 CXX test/cpp_headers/thread.o 00:08:51.488 LINK cuse 00:08:51.488 CXX test/cpp_headers/trace.o 00:08:51.488 CXX test/cpp_headers/trace_parser.o 00:08:51.488 CXX test/cpp_headers/tree.o 00:08:51.488 CXX test/cpp_headers/ublk.o 00:08:51.488 CXX test/cpp_headers/util.o 00:08:51.749 LINK nvmf 00:08:51.749 CXX test/cpp_headers/uuid.o 00:08:51.749 CXX test/cpp_headers/version.o 00:08:51.749 CXX test/cpp_headers/vfio_user_pci.o 00:08:51.749 CXX test/cpp_headers/vfio_user_spec.o 00:08:51.749 CXX test/cpp_headers/vhost.o 00:08:51.749 CXX test/cpp_headers/vmd.o 00:08:51.749 CXX test/cpp_headers/xor.o 00:08:51.749 CXX test/cpp_headers/zipf.o 00:08:55.040 LINK esnap 00:08:55.300 00:08:55.300 real 1m58.082s 00:08:55.300 user 10m48.942s 00:08:55.300 sys 2m4.597s 00:08:55.300 13:03:42 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:08:55.300 13:03:42 make -- common/autotest_common.sh@10 -- $ set +x 00:08:55.300 ************************************ 00:08:55.300 END TEST make 00:08:55.300 ************************************ 00:08:55.558 13:03:42 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:08:55.558 13:03:42 -- pm/common@29 -- $ signal_monitor_resources TERM 00:08:55.558 13:03:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:08:55.558 13:03:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:55.558 13:03:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:08:55.558 13:03:42 -- pm/common@44 -- $ pid=5254 00:08:55.558 13:03:42 -- pm/common@50 -- $ kill -TERM 5254 00:08:55.558 13:03:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:55.558 13:03:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:08:55.558 13:03:42 -- pm/common@44 -- $ pid=5256 00:08:55.558 13:03:42 -- pm/common@50 -- $ kill -TERM 5256 00:08:55.558 13:03:42 -- 
spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:08:55.558 13:03:42 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:08:55.558 13:03:42 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:55.558 13:03:42 -- common/autotest_common.sh@1711 -- # lcov --version 00:08:55.558 13:03:42 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:55.558 13:03:42 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:55.558 13:03:42 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:55.558 13:03:42 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:55.558 13:03:42 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:55.558 13:03:42 -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.558 13:03:42 -- scripts/common.sh@336 -- # read -ra ver1 00:08:55.558 13:03:42 -- scripts/common.sh@337 -- # IFS=.-: 00:08:55.558 13:03:42 -- scripts/common.sh@337 -- # read -ra ver2 00:08:55.558 13:03:42 -- scripts/common.sh@338 -- # local 'op=<' 00:08:55.558 13:03:42 -- scripts/common.sh@340 -- # ver1_l=2 00:08:55.558 13:03:42 -- scripts/common.sh@341 -- # ver2_l=1 00:08:55.559 13:03:42 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:55.559 13:03:42 -- scripts/common.sh@344 -- # case "$op" in 00:08:55.559 13:03:42 -- scripts/common.sh@345 -- # : 1 00:08:55.559 13:03:42 -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:55.559 13:03:42 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:55.559 13:03:42 -- scripts/common.sh@365 -- # decimal 1 00:08:55.559 13:03:42 -- scripts/common.sh@353 -- # local d=1 00:08:55.559 13:03:42 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:55.559 13:03:42 -- scripts/common.sh@355 -- # echo 1 00:08:55.559 13:03:42 -- scripts/common.sh@365 -- # ver1[v]=1 00:08:55.559 13:03:42 -- scripts/common.sh@366 -- # decimal 2 00:08:55.559 13:03:42 -- scripts/common.sh@353 -- # local d=2 00:08:55.559 13:03:42 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:55.559 13:03:42 -- scripts/common.sh@355 -- # echo 2 00:08:55.559 13:03:42 -- scripts/common.sh@366 -- # ver2[v]=2 00:08:55.559 13:03:42 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:55.559 13:03:42 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:55.559 13:03:42 -- scripts/common.sh@368 -- # return 0 00:08:55.559 13:03:42 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:55.559 13:03:42 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:55.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.559 --rc genhtml_branch_coverage=1 00:08:55.559 --rc genhtml_function_coverage=1 00:08:55.559 --rc genhtml_legend=1 00:08:55.559 --rc geninfo_all_blocks=1 00:08:55.559 --rc geninfo_unexecuted_blocks=1 00:08:55.559 00:08:55.559 ' 00:08:55.559 13:03:42 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:55.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.559 --rc genhtml_branch_coverage=1 00:08:55.559 --rc genhtml_function_coverage=1 00:08:55.559 --rc genhtml_legend=1 00:08:55.559 --rc geninfo_all_blocks=1 00:08:55.559 --rc geninfo_unexecuted_blocks=1 00:08:55.559 00:08:55.559 ' 00:08:55.559 13:03:42 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:55.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.559 --rc genhtml_branch_coverage=1 00:08:55.559 --rc 
genhtml_function_coverage=1 00:08:55.559 --rc genhtml_legend=1 00:08:55.559 --rc geninfo_all_blocks=1 00:08:55.559 --rc geninfo_unexecuted_blocks=1 00:08:55.559 00:08:55.559 ' 00:08:55.559 13:03:42 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:55.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.559 --rc genhtml_branch_coverage=1 00:08:55.559 --rc genhtml_function_coverage=1 00:08:55.559 --rc genhtml_legend=1 00:08:55.559 --rc geninfo_all_blocks=1 00:08:55.559 --rc geninfo_unexecuted_blocks=1 00:08:55.559 00:08:55.559 ' 00:08:55.559 13:03:42 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:55.559 13:03:42 -- nvmf/common.sh@7 -- # uname -s 00:08:55.559 13:03:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:55.559 13:03:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:55.559 13:03:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:55.559 13:03:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:55.559 13:03:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:55.559 13:03:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:55.559 13:03:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:55.559 13:03:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:55.559 13:03:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:55.559 13:03:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:55.818 13:03:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7f176c8d-8b6b-4f89-9f07-a020c6485b6a 00:08:55.818 13:03:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=7f176c8d-8b6b-4f89-9f07-a020c6485b6a 00:08:55.818 13:03:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:55.818 13:03:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:55.818 13:03:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:55.818 13:03:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:08:55.818 13:03:42 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:55.818 13:03:42 -- scripts/common.sh@15 -- # shopt -s extglob 00:08:55.818 13:03:42 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:55.818 13:03:42 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:55.818 13:03:42 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:55.818 13:03:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.819 13:03:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.819 13:03:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.819 13:03:42 -- paths/export.sh@5 -- # export PATH 00:08:55.819 13:03:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.819 13:03:42 -- nvmf/common.sh@51 -- # : 0 00:08:55.819 13:03:42 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:55.819 13:03:42 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:55.819 13:03:42 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:08:55.819 13:03:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:55.819 13:03:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:55.819 13:03:42 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:55.819 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:55.819 13:03:42 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:55.819 13:03:42 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:55.819 13:03:42 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:55.819 13:03:42 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:08:55.819 13:03:42 -- spdk/autotest.sh@32 -- # uname -s 00:08:55.819 13:03:42 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:08:55.819 13:03:42 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:08:55.819 13:03:42 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:08:55.819 13:03:42 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:08:55.819 13:03:42 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:08:55.819 13:03:42 -- spdk/autotest.sh@44 -- # modprobe nbd 00:08:55.819 13:03:42 -- spdk/autotest.sh@46 -- # type -P udevadm 00:08:55.819 13:03:42 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:08:55.819 13:03:42 -- spdk/autotest.sh@48 -- # udevadm_pid=54520 00:08:55.819 13:03:42 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:08:55.819 13:03:42 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:08:55.819 13:03:42 -- pm/common@17 -- # local monitor 00:08:55.819 13:03:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:55.819 13:03:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:55.819 13:03:42 -- pm/common@25 -- # sleep 1 00:08:55.819 13:03:42 -- pm/common@21 -- # date +%s 00:08:55.819 13:03:42 -- 
pm/common@21 -- # date +%s 00:08:55.819 13:03:42 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733490222 00:08:55.819 13:03:42 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733490222 00:08:55.819 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733490222_collect-cpu-load.pm.log 00:08:55.819 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733490222_collect-vmstat.pm.log 00:08:56.755 13:03:43 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:08:56.755 13:03:43 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:08:56.755 13:03:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:56.755 13:03:43 -- common/autotest_common.sh@10 -- # set +x 00:08:56.755 13:03:43 -- spdk/autotest.sh@59 -- # create_test_list 00:08:56.755 13:03:43 -- common/autotest_common.sh@752 -- # xtrace_disable 00:08:56.755 13:03:43 -- common/autotest_common.sh@10 -- # set +x 00:08:56.755 13:03:43 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:08:56.755 13:03:43 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:08:56.755 13:03:43 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:08:56.755 13:03:43 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:08:56.755 13:03:43 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:08:56.755 13:03:43 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:08:56.755 13:03:43 -- common/autotest_common.sh@1457 -- # uname 00:08:56.755 13:03:43 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:08:56.755 13:03:43 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:08:56.755 13:03:43 -- common/autotest_common.sh@1477 -- 
# uname
00:08:56.755 13:03:43 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]]
00:08:56.755 13:03:43 -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:08:56.755 13:03:43 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:08:57.014 lcov: LCOV version 1.15
00:08:57.014 13:03:43 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info
00:09:15.102 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:09:15.102 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno
00:09:33.180 13:04:17 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:09:33.180 13:04:17 -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:33.180 13:04:17 -- common/autotest_common.sh@10 -- # set +x
00:09:33.180 13:04:17 -- spdk/autotest.sh@78 -- # rm -f
00:09:33.180 13:04:17 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:09:33.180 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:33.180 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:09:33.180 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:09:33.180 13:04:18 -- spdk/autotest.sh@83 -- # get_zoned_devs
00:09:33.180 13:04:18 -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:09:33.180 13:04:18 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:09:33.180 13:04:18 -- common/autotest_common.sh@1658 -- # zoned_ctrls=()
00:09:33.181 13:04:18 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls
00:09:33.181 13:04:18 -- common/autotest_common.sh@1659 -- # local nvme bdf ns
00:09:33.181 13:04:18 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:09:33.181 13:04:18 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0
00:09:33.181 13:04:18 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:09:33.181 13:04:18 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1
00:09:33.181 13:04:18 -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:09:33.181 13:04:18 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:09:33.181 13:04:18 -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:09:33.181 13:04:18 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:09:33.181 13:04:18 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0
00:09:33.181 13:04:18 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:09:33.181 13:04:18 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1
00:09:33.181 13:04:18 -- common/autotest_common.sh@1650 -- # local device=nvme1n1
00:09:33.181 13:04:18 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:09:33.181 13:04:18 -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:09:33.181 13:04:18 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:09:33.181 13:04:18 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2
00:09:33.181 13:04:18 -- common/autotest_common.sh@1650 -- # local device=nvme1n2
00:09:33.181 13:04:18 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]]
00:09:33.181 13:04:18 -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:09:33.181 13:04:18 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:09:33.181 13:04:18 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3
00:09:33.181 13:04:18 -- common/autotest_common.sh@1650 -- # local device=nvme1n3
00:09:33.181 13:04:18 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]]
00:09:33.181 13:04:18 -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:09:33.181 13:04:18 -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:09:33.181 13:04:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:09:33.181 13:04:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:09:33.181 13:04:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:09:33.181 13:04:18 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:09:33.181 13:04:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:09:33.181 No valid GPT data, bailing
00:09:33.181 13:04:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:09:33.181 13:04:18 -- scripts/common.sh@394 -- # pt=
00:09:33.181 13:04:18 -- scripts/common.sh@395 -- # return 1
00:09:33.181 13:04:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:09:33.181 1+0 records in
00:09:33.181 1+0 records out
00:09:33.181 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00512226 s, 205 MB/s
00:09:33.181 13:04:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:09:33.181 13:04:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:09:33.181 13:04:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1
00:09:33.181 13:04:18 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt
00:09:33.181 13:04:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1
00:09:33.181 No valid GPT data, bailing
00:09:33.181 13:04:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:09:33.181 13:04:18 -- scripts/common.sh@394 -- # pt=
00:09:33.181 13:04:18 -- scripts/common.sh@395 -- # return 1
00:09:33.181 13:04:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1
00:09:33.181 1+0 records in
00:09:33.181 1+0 records out
00:09:33.181 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00376314 s, 279 MB/s
00:09:33.181 13:04:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:09:33.181 13:04:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:09:33.181 13:04:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2
00:09:33.181 13:04:18 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt
00:09:33.181 13:04:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2
00:09:33.181 No valid GPT data, bailing
00:09:33.181 13:04:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2
00:09:33.181 13:04:18 -- scripts/common.sh@394 -- # pt=
00:09:33.181 13:04:18 -- scripts/common.sh@395 -- # return 1
00:09:33.181 13:04:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1
00:09:33.181 1+0 records in
00:09:33.181 1+0 records out
00:09:33.181 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00446653 s, 235 MB/s
00:09:33.181 13:04:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:09:33.181 13:04:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:09:33.181 13:04:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3
00:09:33.181 13:04:18 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt
00:09:33.181 13:04:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3
00:09:33.181 No valid GPT data, bailing
00:09:33.181 13:04:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3
00:09:33.181 13:04:18 -- scripts/common.sh@394 -- # pt=
00:09:33.181 13:04:18 -- scripts/common.sh@395 -- # return 1
00:09:33.181 13:04:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1
00:09:33.181 1+0 records in
00:09:33.181 1+0 records out
00:09:33.181 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00557625 s, 188 MB/s
00:09:33.181 13:04:18 -- spdk/autotest.sh@105 -- # sync
00:09:33.181 13:04:19 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:09:33.181 13:04:19 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:09:33.181 13:04:19 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:09:34.562 13:04:21 -- spdk/autotest.sh@111 -- # uname -s
00:09:34.562 13:04:21 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:09:34.562 13:04:21 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:09:34.562 13:04:21 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:09:34.821 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:34.821 Hugepages
00:09:34.821 node hugesize free / total
00:09:34.821 node0 1048576kB 0 / 0
00:09:34.821 node0 2048kB 0 / 0
00:09:34.821
00:09:35.080 Type BDF Vendor Device NUMA Driver Device Block devices
00:09:35.080 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:09:35.080 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:09:35.080 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:09:35.080 13:04:22 -- spdk/autotest.sh@117 -- # uname -s
00:09:35.080 13:04:22 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:09:35.080 13:04:22 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:09:35.080 13:04:22 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:09:36.019 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:36.019 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:09:36.019 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:09:36.019 13:04:22 -- common/autotest_common.sh@1517 -- # sleep 1
00:09:37.397 13:04:23 -- common/autotest_common.sh@1518 -- # bdfs=()
00:09:37.397 13:04:23 -- common/autotest_common.sh@1518 -- # local bdfs
00:09:37.397 13:04:23 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:09:37.397 13:04:23 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:09:37.397 13:04:23 -- common/autotest_common.sh@1498 -- # bdfs=()
00:09:37.397 13:04:23 -- common/autotest_common.sh@1498 -- # local bdfs
00:09:37.397 13:04:23 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:09:37.397 13:04:23 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:09:37.397 13:04:23 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:09:37.397 13:04:24 -- common/autotest_common.sh@1500 -- # (( 2 == 0 ))
00:09:37.397 13:04:24 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:09:37.397 13:04:24 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:09:37.656 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:37.656 Waiting for block devices as requested
00:09:37.656 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:09:37.656 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:09:37.915 13:04:24 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:09:37.915 13:04:24 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
00:09:37.915 13:04:24 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1
00:09:37.915 13:04:24 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme
00:09:37.915 13:04:24 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:09:37.915 13:04:24 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]]
00:09:37.915 13:04:24 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:09:37.915 13:04:24 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1
00:09:37.915 13:04:24 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1
00:09:37.915 13:04:24 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]]
00:09:37.915 13:04:24 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1
00:09:37.915 13:04:24 -- common/autotest_common.sh@1531 -- # grep oacs
00:09:37.915 13:04:24 -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:09:37.915 13:04:24 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a'
00:09:37.915 13:04:24 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:09:37.915 13:04:24 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:09:37.915 13:04:24 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1
00:09:37.915 13:04:24 -- common/autotest_common.sh@1540 -- # grep unvmcap
00:09:37.915 13:04:24 -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:09:37.915 13:04:24 -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:09:37.915 13:04:24 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]]
00:09:37.915 13:04:24 -- common/autotest_common.sh@1543 -- # continue
00:09:37.915 13:04:24 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:09:37.915 13:04:24 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0
00:09:37.915 13:04:24 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme
00:09:37.915 13:04:24 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1
00:09:37.915 13:04:24 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:09:37.915 13:04:24 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]]
00:09:37.915 13:04:24 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:09:37.915 13:04:24 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0
00:09:37.915 13:04:24 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:09:37.915 13:04:24 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:09:37.915 13:04:24 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:09:37.915 13:04:24 -- common/autotest_common.sh@1531 -- # grep oacs
00:09:37.915 13:04:24 -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:09:37.915 13:04:24 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a'
00:09:37.915 13:04:24 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:09:37.915 13:04:24 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:09:37.915 13:04:24 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:09:37.915 13:04:24 -- common/autotest_common.sh@1540 -- # grep unvmcap
00:09:37.915 13:04:24 -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:09:37.915 13:04:24 -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:09:37.915 13:04:24 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]]
00:09:37.915 13:04:24 -- common/autotest_common.sh@1543 -- # continue
00:09:37.915 13:04:24 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:09:37.915 13:04:24 -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:37.915 13:04:24 -- common/autotest_common.sh@10 -- # set +x
00:09:37.915 13:04:24 -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:09:37.915 13:04:24 -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:37.915 13:04:24 -- common/autotest_common.sh@10 -- # set +x
00:09:37.915 13:04:24 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:09:38.534 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:38.813 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:09:38.813 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:09:38.813 13:04:25 -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:09:38.813 13:04:25 -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:38.813 13:04:25 -- common/autotest_common.sh@10 -- # set +x
00:09:38.813 13:04:25 -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:09:38.813 13:04:25 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs
00:09:38.813 13:04:25 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54
00:09:38.813 13:04:25 -- common/autotest_common.sh@1563 -- # bdfs=()
00:09:38.813 13:04:25 -- common/autotest_common.sh@1563 -- # _bdfs=()
00:09:38.813 13:04:25 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs
00:09:38.813 13:04:25 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs))
00:09:38.813 13:04:25 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs
00:09:38.813 13:04:25 -- common/autotest_common.sh@1498 -- # bdfs=()
00:09:38.813 13:04:25 -- common/autotest_common.sh@1498 -- # local bdfs
00:09:38.813 13:04:25 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:09:38.813 13:04:25 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:09:38.813 13:04:25 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:09:38.813 13:04:25 -- common/autotest_common.sh@1500 -- # (( 2 == 0 ))
00:09:38.813 13:04:25 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:09:38.813 13:04:25 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:09:38.813 13:04:25 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device
00:09:38.813 13:04:25 -- common/autotest_common.sh@1566 -- # device=0x0010
00:09:38.813 13:04:25 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:09:38.813 13:04:25 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:09:38.813 13:04:25 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device
00:09:39.072 13:04:25 -- common/autotest_common.sh@1566 -- # device=0x0010
00:09:39.072 13:04:25 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:09:39.072 13:04:25 -- common/autotest_common.sh@1572 -- # (( 0 > 0 ))
00:09:39.072 13:04:25 -- common/autotest_common.sh@1572 -- # return 0
00:09:39.072 13:04:25 -- common/autotest_common.sh@1579 -- # [[ -z '' ]]
00:09:39.072 13:04:25 -- common/autotest_common.sh@1580 -- # return 0
00:09:39.072 13:04:25 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:09:39.072 13:04:25 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:09:39.072 13:04:25 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:09:39.072 13:04:25 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:09:39.072 13:04:25 -- spdk/autotest.sh@149 -- # timing_enter lib
00:09:39.072 13:04:25 -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:39.072 13:04:25 -- common/autotest_common.sh@10 -- # set +x
00:09:39.073 13:04:25 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:09:39.073 13:04:25 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:09:39.073 13:04:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:39.073 13:04:25 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:39.073 13:04:25 -- common/autotest_common.sh@10 -- # set +x
00:09:39.073 ************************************
00:09:39.073 START TEST env
00:09:39.073 ************************************
00:09:39.073 13:04:25 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:09:39.073 * Looking for test storage...
00:09:39.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env
00:09:39.073 13:04:25 env -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:39.073 13:04:25 env -- common/autotest_common.sh@1711 -- # lcov --version
00:09:39.073 13:04:25 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:39.073 13:04:26 env -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:39.073 13:04:26 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:39.073 13:04:26 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:39.073 13:04:26 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:39.073 13:04:26 env -- scripts/common.sh@336 -- # IFS=.-:
00:09:39.073 13:04:26 env -- scripts/common.sh@336 -- # read -ra ver1
00:09:39.073 13:04:26 env -- scripts/common.sh@337 -- # IFS=.-:
00:09:39.073 13:04:26 env -- scripts/common.sh@337 -- # read -ra ver2
00:09:39.073 13:04:26 env -- scripts/common.sh@338 -- # local 'op=<'
00:09:39.073 13:04:26 env -- scripts/common.sh@340 -- # ver1_l=2
00:09:39.073 13:04:26 env -- scripts/common.sh@341 -- # ver2_l=1
00:09:39.073 13:04:26 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:39.073 13:04:26 env -- scripts/common.sh@344 -- # case "$op" in
00:09:39.073 13:04:26 env -- scripts/common.sh@345 -- # : 1
00:09:39.073 13:04:26 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:39.073 13:04:26 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:39.073 13:04:26 env -- scripts/common.sh@365 -- # decimal 1
00:09:39.073 13:04:26 env -- scripts/common.sh@353 -- # local d=1
00:09:39.073 13:04:26 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:39.073 13:04:26 env -- scripts/common.sh@355 -- # echo 1
00:09:39.073 13:04:26 env -- scripts/common.sh@365 -- # ver1[v]=1
00:09:39.073 13:04:26 env -- scripts/common.sh@366 -- # decimal 2
00:09:39.073 13:04:26 env -- scripts/common.sh@353 -- # local d=2
00:09:39.073 13:04:26 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:39.073 13:04:26 env -- scripts/common.sh@355 -- # echo 2
00:09:39.073 13:04:26 env -- scripts/common.sh@366 -- # ver2[v]=2
00:09:39.073 13:04:26 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:39.073 13:04:26 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:39.073 13:04:26 env -- scripts/common.sh@368 -- # return 0
00:09:39.073 13:04:26 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:39.073 13:04:26 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:39.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:39.073 --rc genhtml_branch_coverage=1
00:09:39.073 --rc genhtml_function_coverage=1
00:09:39.073 --rc genhtml_legend=1
00:09:39.073 --rc geninfo_all_blocks=1
00:09:39.073 --rc geninfo_unexecuted_blocks=1
00:09:39.073
00:09:39.073 '
00:09:39.073 13:04:26 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:39.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:39.073 --rc genhtml_branch_coverage=1
00:09:39.073 --rc genhtml_function_coverage=1
00:09:39.073 --rc genhtml_legend=1
00:09:39.073 --rc geninfo_all_blocks=1
00:09:39.073 --rc geninfo_unexecuted_blocks=1
00:09:39.073
00:09:39.073 '
00:09:39.073 13:04:26 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:09:39.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:39.073 --rc genhtml_branch_coverage=1
00:09:39.073 --rc genhtml_function_coverage=1
00:09:39.073 --rc genhtml_legend=1
00:09:39.073 --rc geninfo_all_blocks=1
00:09:39.073 --rc geninfo_unexecuted_blocks=1
00:09:39.073
00:09:39.073 '
00:09:39.073 13:04:26 env -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:09:39.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:39.073 --rc genhtml_branch_coverage=1
00:09:39.073 --rc genhtml_function_coverage=1
00:09:39.073 --rc genhtml_legend=1
00:09:39.073 --rc geninfo_all_blocks=1
00:09:39.073 --rc geninfo_unexecuted_blocks=1
00:09:39.073
00:09:39.073 '
00:09:39.073 13:04:26 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:09:39.073 13:04:26 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:39.073 13:04:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:39.073 13:04:26 env -- common/autotest_common.sh@10 -- # set +x
00:09:39.073 ************************************
00:09:39.073 START TEST env_memory
00:09:39.073 ************************************
00:09:39.073 13:04:26 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:09:39.331
00:09:39.331
00:09:39.331 CUnit - A unit testing framework for C - Version 2.1-3
00:09:39.331 http://cunit.sourceforge.net/
00:09:39.331
00:09:39.331
00:09:39.331 Suite: memory
00:09:39.331 Test: alloc and free memory map ...[2024-12-06 13:04:26.158424] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:09:39.331 passed
00:09:39.331 Test: mem map translation ...[2024-12-06 13:04:26.220599] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:09:39.331 [2024-12-06 13:04:26.220741] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:09:39.331 [2024-12-06 13:04:26.220859] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:09:39.331 [2024-12-06 13:04:26.220895] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:09:39.331 passed
00:09:39.331 Test: mem map registration ...[2024-12-06 13:04:26.320004] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:09:39.331 [2024-12-06 13:04:26.320116] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:09:39.588 passed
00:09:39.588 Test: mem map adjacent registrations ...passed
00:09:39.588
00:09:39.588 Run Summary: Type Total Ran Passed Failed Inactive
00:09:39.588 suites 1 1 n/a 0 0
00:09:39.588 tests 4 4 4 0 0
00:09:39.588 asserts 152 152 152 0 n/a
00:09:39.588
00:09:39.588 Elapsed time = 0.335 seconds
00:09:39.588
00:09:39.588 real 0m0.380s
00:09:39.588 user 0m0.344s
00:09:39.588 sys 0m0.026s
00:09:39.588 13:04:26 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:39.588 ************************************
00:09:39.588 END TEST env_memory
00:09:39.588 ************************************
00:09:39.588 13:04:26 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:09:39.588 13:04:26 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:09:39.588 13:04:26 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:39.588 13:04:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:39.588 13:04:26 env -- common/autotest_common.sh@10 -- # set +x
00:09:39.588 ************************************
00:09:39.588 START TEST env_vtophys
00:09:39.588 ************************************
00:09:39.588 13:04:26 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:09:39.588 EAL: lib.eal log level changed from notice to debug
00:09:39.588 EAL: Detected lcore 0 as core 0 on socket 0
00:09:39.588 EAL: Detected lcore 1 as core 0 on socket 0
00:09:39.588 EAL: Detected lcore 2 as core 0 on socket 0
00:09:39.588 EAL: Detected lcore 3 as core 0 on socket 0
00:09:39.588 EAL: Detected lcore 4 as core 0 on socket 0
00:09:39.588 EAL: Detected lcore 5 as core 0 on socket 0
00:09:39.589 EAL: Detected lcore 6 as core 0 on socket 0
00:09:39.589 EAL: Detected lcore 7 as core 0 on socket 0
00:09:39.589 EAL: Detected lcore 8 as core 0 on socket 0
00:09:39.589 EAL: Detected lcore 9 as core 0 on socket 0
00:09:39.589 EAL: Maximum logical cores by configuration: 128
00:09:39.589 EAL: Detected CPU lcores: 10
00:09:39.589 EAL: Detected NUMA nodes: 1
00:09:39.589 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:09:39.589 EAL: Detected shared linkage of DPDK
00:09:39.847 EAL: No shared files mode enabled, IPC will be disabled
00:09:39.847 EAL: Selected IOVA mode 'PA'
00:09:39.847 EAL: Probing VFIO support...
00:09:39.847 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:09:39.847 EAL: VFIO modules not loaded, skipping VFIO support...
00:09:39.847 EAL: Ask a virtual area of 0x2e000 bytes
00:09:39.847 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:09:39.847 EAL: Setting up physically contiguous memory...
00:09:39.847 EAL: Setting maximum number of open files to 524288
00:09:39.847 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:09:39.847 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:09:39.847 EAL: Ask a virtual area of 0x61000 bytes
00:09:39.847 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:09:39.847 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:09:39.847 EAL: Ask a virtual area of 0x400000000 bytes
00:09:39.847 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:09:39.847 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:09:39.847 EAL: Ask a virtual area of 0x61000 bytes
00:09:39.847 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:09:39.847 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:09:39.847 EAL: Ask a virtual area of 0x400000000 bytes
00:09:39.847 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:09:39.847 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:09:39.847 EAL: Ask a virtual area of 0x61000 bytes
00:09:39.847 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:09:39.847 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:09:39.847 EAL: Ask a virtual area of 0x400000000 bytes
00:09:39.847 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:09:39.847 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:09:39.847 EAL: Ask a virtual area of 0x61000 bytes
00:09:39.847 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:09:39.847 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:09:39.847 EAL: Ask a virtual area of 0x400000000 bytes
00:09:39.847 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:09:39.847 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:09:39.847 EAL: Hugepages will be freed exactly as allocated.
00:09:39.847 EAL: No shared files mode enabled, IPC is disabled
00:09:39.847 EAL: No shared files mode enabled, IPC is disabled
00:09:39.847 EAL: TSC frequency is ~2200000 KHz
00:09:39.847 EAL: Main lcore 0 is ready (tid=7f411441ba40;cpuset=[0])
00:09:39.847 EAL: Trying to obtain current memory policy.
00:09:39.847 EAL: Setting policy MPOL_PREFERRED for socket 0
00:09:39.847 EAL: Restoring previous memory policy: 0
00:09:39.847 EAL: request: mp_malloc_sync
00:09:39.847 EAL: No shared files mode enabled, IPC is disabled
00:09:39.847 EAL: Heap on socket 0 was expanded by 2MB
00:09:39.847 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:09:39.847 EAL: No PCI address specified using 'addr=' in: bus=pci
00:09:39.847 EAL: Mem event callback 'spdk:(nil)' registered
00:09:39.847 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory)
00:09:39.847
00:09:39.847
00:09:39.847 CUnit - A unit testing framework for C - Version 2.1-3
00:09:39.847 http://cunit.sourceforge.net/
00:09:39.847
00:09:39.847
00:09:39.847 Suite: components_suite
00:09:40.413 Test: vtophys_malloc_test ...passed
00:09:40.413 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:09:40.413 EAL: Setting policy MPOL_PREFERRED for socket 0
00:09:40.413 EAL: Restoring previous memory policy: 4
00:09:40.413 EAL: Calling mem event callback 'spdk:(nil)'
00:09:40.413 EAL: request: mp_malloc_sync
00:09:40.413 EAL: No shared files mode enabled, IPC is disabled
00:09:40.413 EAL: Heap on socket 0 was expanded by 4MB
00:09:40.413 EAL: Calling mem event callback 'spdk:(nil)'
00:09:40.413 EAL: request: mp_malloc_sync
00:09:40.413 EAL: No shared files mode enabled, IPC is disabled
00:09:40.413 EAL: Heap on socket 0 was shrunk by 4MB
00:09:40.413 EAL: Trying to obtain current memory policy.
00:09:40.413 EAL: Setting policy MPOL_PREFERRED for socket 0
00:09:40.413 EAL: Restoring previous memory policy: 4
00:09:40.413 EAL: Calling mem event callback 'spdk:(nil)'
00:09:40.413 EAL: request: mp_malloc_sync
00:09:40.413 EAL: No shared files mode enabled, IPC is disabled
00:09:40.413 EAL: Heap on socket 0 was expanded by 6MB
00:09:40.413 EAL: Calling mem event callback 'spdk:(nil)'
00:09:40.413 EAL: request: mp_malloc_sync
00:09:40.413 EAL: No shared files mode enabled, IPC is disabled
00:09:40.413 EAL: Heap on socket 0 was shrunk by 6MB
00:09:40.413 EAL: Trying to obtain current memory policy.
00:09:40.413 EAL: Setting policy MPOL_PREFERRED for socket 0
00:09:40.413 EAL: Restoring previous memory policy: 4
00:09:40.413 EAL: Calling mem event callback 'spdk:(nil)'
00:09:40.413 EAL: request: mp_malloc_sync
00:09:40.413 EAL: No shared files mode enabled, IPC is disabled
00:09:40.413 EAL: Heap on socket 0 was expanded by 10MB
00:09:40.413 EAL: Calling mem event callback 'spdk:(nil)'
00:09:40.413 EAL: request: mp_malloc_sync
00:09:40.413 EAL: No shared files mode enabled, IPC is disabled
00:09:40.413 EAL: Heap on socket 0 was shrunk by 10MB
00:09:40.413 EAL: Trying to obtain current memory policy.
00:09:40.413 EAL: Setting policy MPOL_PREFERRED for socket 0
00:09:40.413 EAL: Restoring previous memory policy: 4
00:09:40.413 EAL: Calling mem event callback 'spdk:(nil)'
00:09:40.413 EAL: request: mp_malloc_sync
00:09:40.413 EAL: No shared files mode enabled, IPC is disabled
00:09:40.413 EAL: Heap on socket 0 was expanded by 18MB
00:09:40.413 EAL: Calling mem event callback 'spdk:(nil)'
00:09:40.413 EAL: request: mp_malloc_sync
00:09:40.413 EAL: No shared files mode enabled, IPC is disabled
00:09:40.413 EAL: Heap on socket 0 was shrunk by 18MB
00:09:40.413 EAL: Trying to obtain current memory policy.
00:09:40.413 EAL: Setting policy MPOL_PREFERRED for socket 0
00:09:40.413 EAL: Restoring previous memory policy: 4
00:09:40.413 EAL: Calling mem event callback 'spdk:(nil)'
00:09:40.413 EAL: request: mp_malloc_sync
00:09:40.413 EAL: No shared files mode enabled, IPC is disabled
00:09:40.413 EAL: Heap on socket 0 was expanded by 34MB
00:09:40.672 EAL: Calling mem event callback 'spdk:(nil)'
00:09:40.672 EAL: request: mp_malloc_sync
00:09:40.672 EAL: No shared files mode enabled, IPC is disabled
00:09:40.672 EAL: Heap on socket 0 was shrunk by 34MB
00:09:40.672 EAL: Trying to obtain current memory policy.
00:09:40.672 EAL: Setting policy MPOL_PREFERRED for socket 0
00:09:40.672 EAL: Restoring previous memory policy: 4
00:09:40.672 EAL: Calling mem event callback 'spdk:(nil)'
00:09:40.672 EAL: request: mp_malloc_sync
00:09:40.672 EAL: No shared files mode enabled, IPC is disabled
00:09:40.672 EAL: Heap on socket 0 was expanded by 66MB
00:09:40.672 EAL: Calling mem event callback 'spdk:(nil)'
00:09:40.672 EAL: request: mp_malloc_sync
00:09:40.672 EAL: No shared files mode enabled, IPC is disabled
00:09:40.672 EAL: Heap on socket 0 was shrunk by 66MB
00:09:40.931 EAL: Trying to obtain current memory policy.
00:09:40.931 EAL: Setting policy MPOL_PREFERRED for socket 0
00:09:40.931 EAL: Restoring previous memory policy: 4
00:09:40.931 EAL: Calling mem event callback 'spdk:(nil)'
00:09:40.931 EAL: request: mp_malloc_sync
00:09:40.931 EAL: No shared files mode enabled, IPC is disabled
00:09:40.931 EAL: Heap on socket 0 was expanded by 130MB
00:09:41.189 EAL: Calling mem event callback 'spdk:(nil)'
00:09:41.189 EAL: request: mp_malloc_sync
00:09:41.189 EAL: No shared files mode enabled, IPC is disabled
00:09:41.189 EAL: Heap on socket 0 was shrunk by 130MB
00:09:41.448 EAL: Trying to obtain current memory policy.
00:09:41.448 EAL: Setting policy MPOL_PREFERRED for socket 0
00:09:41.448 EAL: Restoring previous memory policy: 4
00:09:41.448 EAL: Calling mem event callback 'spdk:(nil)'
00:09:41.448 EAL: request: mp_malloc_sync
00:09:41.448 EAL: No shared files mode enabled, IPC is disabled
00:09:41.448 EAL: Heap on socket 0 was expanded by 258MB
00:09:42.015 EAL: Calling mem event callback 'spdk:(nil)'
00:09:42.015 EAL: request: mp_malloc_sync
00:09:42.015 EAL: No shared files mode enabled, IPC is disabled
00:09:42.015 EAL: Heap on socket 0 was shrunk by 258MB
00:09:42.273 EAL: Trying to obtain current memory policy.
00:09:42.273 EAL: Setting policy MPOL_PREFERRED for socket 0
00:09:42.531 EAL: Restoring previous memory policy: 4
00:09:42.531 EAL: Calling mem event callback 'spdk:(nil)'
00:09:42.531 EAL: request: mp_malloc_sync
00:09:42.531 EAL: No shared files mode enabled, IPC is disabled
00:09:42.531 EAL: Heap on socket 0 was expanded by 514MB
00:09:43.467 EAL: Calling mem event callback 'spdk:(nil)'
00:09:43.467 EAL: request: mp_malloc_sync
00:09:43.467 EAL: No shared files mode enabled, IPC is disabled
00:09:43.467 EAL: Heap on socket 0 was shrunk by 514MB
00:09:44.035 EAL: Trying to obtain current memory policy.
00:09:44.035 EAL: Setting policy MPOL_PREFERRED for socket 0
00:09:44.603 EAL: Restoring previous memory policy: 4
00:09:44.603 EAL: Calling mem event callback 'spdk:(nil)'
00:09:44.603 EAL: request: mp_malloc_sync
00:09:44.603 EAL: No shared files mode enabled, IPC is disabled
00:09:44.603 EAL: Heap on socket 0 was expanded by 1026MB
00:09:46.504 EAL: Calling mem event callback 'spdk:(nil)'
00:09:46.504 EAL: request: mp_malloc_sync
00:09:46.504 EAL: No shared files mode enabled, IPC is disabled
00:09:46.504 EAL: Heap on socket 0 was shrunk by 1026MB
00:09:47.947 passed
00:09:47.947
00:09:47.947 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:09:47.947               suites      1      1    n/a      0        0
00:09:47.947                tests      2      2      2      0        0
00:09:47.947              asserts   5635   5635   5635      0      n/a
00:09:47.947
00:09:47.947 Elapsed time =    7.996 seconds
00:09:47.947 EAL: Calling mem event callback 'spdk:(nil)'
00:09:47.947 EAL: request: mp_malloc_sync
00:09:47.947 EAL: No shared files mode enabled, IPC is disabled
00:09:47.947 EAL: Heap on socket 0 was shrunk by 2MB
00:09:47.947 EAL: No shared files mode enabled, IPC is disabled
00:09:47.947 EAL: No shared files mode enabled, IPC is disabled
00:09:47.947 EAL: No shared files mode enabled, IPC is disabled
00:09:47.948
00:09:47.948 real 0m8.360s
00:09:47.948 user 0m7.051s
00:09:47.948 sys 0m1.134s
00:09:47.948 13:04:34 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:47.948 13:04:34 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:09:47.948 ************************************
00:09:47.948 END TEST env_vtophys
00:09:47.948 ************************************
00:09:47.948 13:04:34 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:09:47.948 13:04:34 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:47.948 13:04:34 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:47.948 13:04:35 env
-- common/autotest_common.sh@10 -- # set +x
00:09:47.948 ************************************
00:09:47.948 START TEST env_pci
00:09:47.948 ************************************
00:09:47.948 13:04:34 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:09:48.207
00:09:48.207
00:09:48.207 CUnit - A unit testing framework for C - Version 2.1-3
00:09:48.207 http://cunit.sourceforge.net/
00:09:48.207
00:09:48.207
00:09:48.207 Suite: pci
00:09:48.207 Test: pci_hook ...[2024-12-06 13:04:34.971539] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56858 has claimed it
00:09:48.207 passed
00:09:48.207
00:09:48.207 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:09:48.207               suites      1      1    n/a      0        0
00:09:48.207                tests      1      1      1      0        0
00:09:48.207              asserts     25     25     25      0      n/a
00:09:48.207
00:09:48.207 Elapsed time =    0.008 seconds
00:09:48.207 EAL: Cannot find device (10000:00:01.0)
00:09:48.207 EAL: Failed to attach device on primary process
00:09:48.207
00:09:48.207 real 0m0.085s
00:09:48.207 user 0m0.039s
00:09:48.207 sys 0m0.044s
00:09:48.207 13:04:35 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:48.207 13:04:35 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:09:48.207 ************************************
00:09:48.207 END TEST env_pci
00:09:48.207 ************************************
00:09:48.207 13:04:35 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:09:48.207 13:04:35 env -- env/env.sh@15 -- # uname
00:09:48.207 13:04:35 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:09:48.207 13:04:35 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:09:48.207 13:04:35 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:09:48.207 13:04:35 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:09:48.207 13:04:35 env
-- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:48.207 13:04:35 env -- common/autotest_common.sh@10 -- # set +x
00:09:48.207 ************************************
00:09:48.207 START TEST env_dpdk_post_init
00:09:48.207 ************************************
00:09:48.207 13:04:35 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:09:48.207 EAL: Detected CPU lcores: 10
00:09:48.207 EAL: Detected NUMA nodes: 1
00:09:48.207 EAL: Detected shared linkage of DPDK
00:09:48.207 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:09:48.207 EAL: Selected IOVA mode 'PA'
00:09:48.464 TELEMETRY: No legacy callbacks, legacy socket not created
00:09:48.464 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:09:48.464 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)
00:09:48.465 Starting DPDK initialization...
00:09:48.465 Starting SPDK post initialization...
00:09:48.465 SPDK NVMe probe
00:09:48.465 Attaching to 0000:00:10.0
00:09:48.465 Attaching to 0000:00:11.0
00:09:48.465 Attached to 0000:00:10.0
00:09:48.465 Attached to 0000:00:11.0
00:09:48.465 Cleaning up...
00:09:48.465
00:09:48.465 real 0m0.327s
00:09:48.465 user 0m0.111s
00:09:48.465 sys 0m0.115s
00:09:48.465 13:04:35 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:48.465 13:04:35 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:09:48.465 ************************************
00:09:48.465 END TEST env_dpdk_post_init
00:09:48.465 ************************************
00:09:48.465 13:04:35 env -- env/env.sh@26 -- # uname
00:09:48.465 13:04:35 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:09:48.465 13:04:35 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:09:48.465 13:04:35 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:48.465 13:04:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:48.465 13:04:35 env -- common/autotest_common.sh@10 -- # set +x
00:09:48.465 ************************************
00:09:48.465 START TEST env_mem_callbacks
00:09:48.465 ************************************
00:09:48.465 13:04:35 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:09:48.722 EAL: Detected CPU lcores: 10
00:09:48.722 EAL: Detected NUMA nodes: 1
00:09:48.722 EAL: Detected shared linkage of DPDK
00:09:48.722 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:09:48.722 EAL: Selected IOVA mode 'PA'
00:09:48.722 TELEMETRY: No legacy callbacks, legacy socket not created
00:09:48.722
00:09:48.722
00:09:48.722 CUnit - A unit testing framework for C - Version 2.1-3
00:09:48.722 http://cunit.sourceforge.net/
00:09:48.722
00:09:48.722
00:09:48.722 Suite: memory
00:09:48.722 Test: test ...
00:09:48.722 register 0x200000200000 2097152
00:09:48.722 malloc 3145728
00:09:48.722 register 0x200000400000 4194304
00:09:48.722 buf 0x2000004fffc0 len 3145728 PASSED
00:09:48.722 malloc 64
00:09:48.722 buf 0x2000004ffec0 len 64 PASSED
00:09:48.722 malloc 4194304
00:09:48.722 register 0x200000800000 6291456
00:09:48.722 buf 0x2000009fffc0 len 4194304 PASSED
00:09:48.722 free 0x2000004fffc0 3145728
00:09:48.722 free 0x2000004ffec0 64
00:09:48.722 unregister 0x200000400000 4194304 PASSED
00:09:48.722 free 0x2000009fffc0 4194304
00:09:48.722 unregister 0x200000800000 6291456 PASSED
00:09:48.722 malloc 8388608
00:09:48.722 register 0x200000400000 10485760
00:09:48.722 buf 0x2000005fffc0 len 8388608 PASSED
00:09:48.722 free 0x2000005fffc0 8388608
00:09:48.722 unregister 0x200000400000 10485760 PASSED
00:09:48.722 passed
00:09:48.722
00:09:48.722 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:09:48.722               suites      1      1    n/a      0        0
00:09:48.723                tests      1      1      1      0        0
00:09:48.723              asserts     15     15     15      0      n/a
00:09:48.723
00:09:48.723 Elapsed time =    0.076 seconds
00:09:48.980
00:09:48.980 real 0m0.281s
00:09:48.980 user 0m0.112s
00:09:48.980 sys 0m0.067s
00:09:48.980 13:04:35 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:48.980 13:04:35 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:09:48.980 ************************************
00:09:48.980 END TEST env_mem_callbacks
00:09:48.980 ************************************
00:09:48.980
00:09:48.980 real 0m9.939s
00:09:48.980 user 0m7.895s
00:09:48.980 sys 0m1.638s
00:09:48.980 13:04:35 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:48.980 13:04:35 env -- common/autotest_common.sh@10 -- # set +x
00:09:48.980 ************************************
00:09:48.980 END TEST env
00:09:48.980 ************************************
00:09:48.980 13:04:35 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:09:48.980 13:04:35 --
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:48.980 13:04:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:48.980 13:04:35 -- common/autotest_common.sh@10 -- # set +x 00:09:48.980 ************************************ 00:09:48.980 START TEST rpc 00:09:48.980 ************************************ 00:09:48.980 13:04:35 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:48.980 * Looking for test storage... 00:09:48.980 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:48.980 13:04:35 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:48.980 13:04:35 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:48.980 13:04:35 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:49.239 13:04:36 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:49.239 13:04:36 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:49.239 13:04:36 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:49.239 13:04:36 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:49.239 13:04:36 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:49.239 13:04:36 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:49.239 13:04:36 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:49.239 13:04:36 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:49.239 13:04:36 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:49.239 13:04:36 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:49.239 13:04:36 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:49.239 13:04:36 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:49.239 13:04:36 rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:49.239 13:04:36 rpc -- scripts/common.sh@345 -- # : 1 00:09:49.239 13:04:36 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:49.239 13:04:36 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:49.239 13:04:36 rpc -- scripts/common.sh@365 -- # decimal 1 00:09:49.239 13:04:36 rpc -- scripts/common.sh@353 -- # local d=1 00:09:49.239 13:04:36 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:49.239 13:04:36 rpc -- scripts/common.sh@355 -- # echo 1 00:09:49.239 13:04:36 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:49.239 13:04:36 rpc -- scripts/common.sh@366 -- # decimal 2 00:09:49.239 13:04:36 rpc -- scripts/common.sh@353 -- # local d=2 00:09:49.239 13:04:36 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:49.239 13:04:36 rpc -- scripts/common.sh@355 -- # echo 2 00:09:49.239 13:04:36 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:49.239 13:04:36 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:49.239 13:04:36 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:49.239 13:04:36 rpc -- scripts/common.sh@368 -- # return 0 00:09:49.239 13:04:36 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:49.239 13:04:36 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:49.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.239 --rc genhtml_branch_coverage=1 00:09:49.239 --rc genhtml_function_coverage=1 00:09:49.239 --rc genhtml_legend=1 00:09:49.239 --rc geninfo_all_blocks=1 00:09:49.239 --rc geninfo_unexecuted_blocks=1 00:09:49.239 00:09:49.239 ' 00:09:49.239 13:04:36 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:49.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.239 --rc genhtml_branch_coverage=1 00:09:49.239 --rc genhtml_function_coverage=1 00:09:49.239 --rc genhtml_legend=1 00:09:49.239 --rc geninfo_all_blocks=1 00:09:49.239 --rc geninfo_unexecuted_blocks=1 00:09:49.239 00:09:49.239 ' 00:09:49.239 13:04:36 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:49.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:09:49.239 --rc genhtml_branch_coverage=1 00:09:49.239 --rc genhtml_function_coverage=1 00:09:49.239 --rc genhtml_legend=1 00:09:49.239 --rc geninfo_all_blocks=1 00:09:49.239 --rc geninfo_unexecuted_blocks=1 00:09:49.239 00:09:49.239 ' 00:09:49.239 13:04:36 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:49.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.239 --rc genhtml_branch_coverage=1 00:09:49.239 --rc genhtml_function_coverage=1 00:09:49.239 --rc genhtml_legend=1 00:09:49.239 --rc geninfo_all_blocks=1 00:09:49.239 --rc geninfo_unexecuted_blocks=1 00:09:49.239 00:09:49.239 ' 00:09:49.239 13:04:36 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56991 00:09:49.239 13:04:36 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:49.239 13:04:36 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:09:49.239 13:04:36 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56991 00:09:49.239 13:04:36 rpc -- common/autotest_common.sh@835 -- # '[' -z 56991 ']' 00:09:49.239 13:04:36 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.239 13:04:36 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.239 13:04:36 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.239 13:04:36 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.239 13:04:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.239 [2024-12-06 13:04:36.165026] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:09:49.239 [2024-12-06 13:04:36.165226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56991 ] 00:09:49.497 [2024-12-06 13:04:36.358984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.755 [2024-12-06 13:04:36.515059] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:09:49.755 [2024-12-06 13:04:36.515150] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56991' to capture a snapshot of events at runtime. 00:09:49.755 [2024-12-06 13:04:36.515181] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:49.755 [2024-12-06 13:04:36.515199] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:49.755 [2024-12-06 13:04:36.515214] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56991 for offline analysis/debug. 
00:09:49.756 [2024-12-06 13:04:36.516765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.689 13:04:37 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.689 13:04:37 rpc -- common/autotest_common.sh@868 -- # return 0 00:09:50.689 13:04:37 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:50.689 13:04:37 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:50.689 13:04:37 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:50.689 13:04:37 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:50.689 13:04:37 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:50.689 13:04:37 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.689 13:04:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.689 ************************************ 00:09:50.689 START TEST rpc_integrity 00:09:50.689 ************************************ 00:09:50.689 13:04:37 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:09:50.689 13:04:37 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:50.689 13:04:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.689 13:04:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:50.689 13:04:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.689 13:04:37 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:50.689 13:04:37 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:50.689 13:04:37 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:50.689 13:04:37 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:50.689 13:04:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.689 13:04:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:50.689 13:04:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.689 13:04:37 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:50.689 13:04:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:50.689 13:04:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.689 13:04:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:50.689 13:04:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.689 13:04:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:50.689 { 00:09:50.689 "name": "Malloc0", 00:09:50.689 "aliases": [ 00:09:50.689 "60b06bb3-9473-4498-b610-8712bdb3a5a0" 00:09:50.689 ], 00:09:50.689 "product_name": "Malloc disk", 00:09:50.689 "block_size": 512, 00:09:50.690 "num_blocks": 16384, 00:09:50.690 "uuid": "60b06bb3-9473-4498-b610-8712bdb3a5a0", 00:09:50.690 "assigned_rate_limits": { 00:09:50.690 "rw_ios_per_sec": 0, 00:09:50.690 "rw_mbytes_per_sec": 0, 00:09:50.690 "r_mbytes_per_sec": 0, 00:09:50.690 "w_mbytes_per_sec": 0 00:09:50.690 }, 00:09:50.690 "claimed": false, 00:09:50.690 "zoned": false, 00:09:50.690 "supported_io_types": { 00:09:50.690 "read": true, 00:09:50.690 "write": true, 00:09:50.690 "unmap": true, 00:09:50.690 "flush": true, 00:09:50.690 "reset": true, 00:09:50.690 "nvme_admin": false, 00:09:50.690 "nvme_io": false, 00:09:50.690 "nvme_io_md": false, 00:09:50.690 "write_zeroes": true, 00:09:50.690 "zcopy": true, 00:09:50.690 "get_zone_info": false, 00:09:50.690 "zone_management": false, 00:09:50.690 "zone_append": false, 00:09:50.690 "compare": false, 00:09:50.690 "compare_and_write": false, 00:09:50.690 "abort": true, 00:09:50.690 "seek_hole": false, 
00:09:50.690 "seek_data": false, 00:09:50.690 "copy": true, 00:09:50.690 "nvme_iov_md": false 00:09:50.690 }, 00:09:50.690 "memory_domains": [ 00:09:50.690 { 00:09:50.690 "dma_device_id": "system", 00:09:50.690 "dma_device_type": 1 00:09:50.690 }, 00:09:50.690 { 00:09:50.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.690 "dma_device_type": 2 00:09:50.690 } 00:09:50.690 ], 00:09:50.690 "driver_specific": {} 00:09:50.690 } 00:09:50.690 ]' 00:09:50.690 13:04:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:50.690 13:04:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:50.690 13:04:37 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:50.690 13:04:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.690 13:04:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:50.690 [2024-12-06 13:04:37.596320] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:50.690 [2024-12-06 13:04:37.596400] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.690 [2024-12-06 13:04:37.596434] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:50.690 [2024-12-06 13:04:37.596456] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.690 [2024-12-06 13:04:37.599458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.690 [2024-12-06 13:04:37.599523] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:50.690 Passthru0 00:09:50.690 13:04:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.690 13:04:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:50.690 13:04:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.690 13:04:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:09:50.690 13:04:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.690 13:04:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:50.690 { 00:09:50.690 "name": "Malloc0", 00:09:50.690 "aliases": [ 00:09:50.690 "60b06bb3-9473-4498-b610-8712bdb3a5a0" 00:09:50.690 ], 00:09:50.690 "product_name": "Malloc disk", 00:09:50.690 "block_size": 512, 00:09:50.690 "num_blocks": 16384, 00:09:50.690 "uuid": "60b06bb3-9473-4498-b610-8712bdb3a5a0", 00:09:50.690 "assigned_rate_limits": { 00:09:50.690 "rw_ios_per_sec": 0, 00:09:50.690 "rw_mbytes_per_sec": 0, 00:09:50.690 "r_mbytes_per_sec": 0, 00:09:50.690 "w_mbytes_per_sec": 0 00:09:50.690 }, 00:09:50.690 "claimed": true, 00:09:50.690 "claim_type": "exclusive_write", 00:09:50.690 "zoned": false, 00:09:50.690 "supported_io_types": { 00:09:50.690 "read": true, 00:09:50.690 "write": true, 00:09:50.690 "unmap": true, 00:09:50.690 "flush": true, 00:09:50.690 "reset": true, 00:09:50.690 "nvme_admin": false, 00:09:50.690 "nvme_io": false, 00:09:50.690 "nvme_io_md": false, 00:09:50.690 "write_zeroes": true, 00:09:50.690 "zcopy": true, 00:09:50.690 "get_zone_info": false, 00:09:50.690 "zone_management": false, 00:09:50.690 "zone_append": false, 00:09:50.690 "compare": false, 00:09:50.690 "compare_and_write": false, 00:09:50.690 "abort": true, 00:09:50.690 "seek_hole": false, 00:09:50.690 "seek_data": false, 00:09:50.690 "copy": true, 00:09:50.690 "nvme_iov_md": false 00:09:50.690 }, 00:09:50.690 "memory_domains": [ 00:09:50.690 { 00:09:50.690 "dma_device_id": "system", 00:09:50.690 "dma_device_type": 1 00:09:50.690 }, 00:09:50.690 { 00:09:50.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.690 "dma_device_type": 2 00:09:50.690 } 00:09:50.690 ], 00:09:50.690 "driver_specific": {} 00:09:50.690 }, 00:09:50.690 { 00:09:50.690 "name": "Passthru0", 00:09:50.690 "aliases": [ 00:09:50.690 "233e0269-4418-574d-a9ed-f6ab4660e99b" 00:09:50.690 ], 00:09:50.690 "product_name": "passthru", 00:09:50.690 
"block_size": 512, 00:09:50.690 "num_blocks": 16384, 00:09:50.690 "uuid": "233e0269-4418-574d-a9ed-f6ab4660e99b", 00:09:50.690 "assigned_rate_limits": { 00:09:50.690 "rw_ios_per_sec": 0, 00:09:50.690 "rw_mbytes_per_sec": 0, 00:09:50.690 "r_mbytes_per_sec": 0, 00:09:50.690 "w_mbytes_per_sec": 0 00:09:50.690 }, 00:09:50.690 "claimed": false, 00:09:50.690 "zoned": false, 00:09:50.690 "supported_io_types": { 00:09:50.690 "read": true, 00:09:50.690 "write": true, 00:09:50.690 "unmap": true, 00:09:50.690 "flush": true, 00:09:50.690 "reset": true, 00:09:50.690 "nvme_admin": false, 00:09:50.690 "nvme_io": false, 00:09:50.690 "nvme_io_md": false, 00:09:50.690 "write_zeroes": true, 00:09:50.690 "zcopy": true, 00:09:50.690 "get_zone_info": false, 00:09:50.690 "zone_management": false, 00:09:50.690 "zone_append": false, 00:09:50.690 "compare": false, 00:09:50.690 "compare_and_write": false, 00:09:50.690 "abort": true, 00:09:50.690 "seek_hole": false, 00:09:50.690 "seek_data": false, 00:09:50.690 "copy": true, 00:09:50.690 "nvme_iov_md": false 00:09:50.690 }, 00:09:50.690 "memory_domains": [ 00:09:50.690 { 00:09:50.690 "dma_device_id": "system", 00:09:50.690 "dma_device_type": 1 00:09:50.690 }, 00:09:50.690 { 00:09:50.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.690 "dma_device_type": 2 00:09:50.690 } 00:09:50.690 ], 00:09:50.690 "driver_specific": { 00:09:50.690 "passthru": { 00:09:50.690 "name": "Passthru0", 00:09:50.690 "base_bdev_name": "Malloc0" 00:09:50.690 } 00:09:50.690 } 00:09:50.690 } 00:09:50.690 ]' 00:09:50.690 13:04:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:50.690 13:04:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:50.690 13:04:37 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:50.690 13:04:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.690 13:04:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:50.690 13:04:37 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.690 13:04:37 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:50.690 13:04:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.690 13:04:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:50.990 13:04:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.990 13:04:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:50.990 13:04:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.991 13:04:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:50.991 13:04:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.991 13:04:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:50.991 13:04:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:50.991 13:04:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:50.991 00:09:50.991 real 0m0.347s 00:09:50.991 user 0m0.208s 00:09:50.991 sys 0m0.041s 00:09:50.991 13:04:37 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.991 13:04:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:50.991 ************************************ 00:09:50.991 END TEST rpc_integrity 00:09:50.991 ************************************ 00:09:50.991 13:04:37 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:50.991 13:04:37 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:50.991 13:04:37 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.991 13:04:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.991 ************************************ 00:09:50.991 START TEST rpc_plugins 00:09:50.991 ************************************ 00:09:50.991 13:04:37 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:09:50.991 13:04:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:09:50.991 13:04:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.991 13:04:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:50.991 13:04:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.991 13:04:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:50.991 13:04:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:50.991 13:04:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.991 13:04:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:50.991 13:04:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.991 13:04:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:50.991 { 00:09:50.991 "name": "Malloc1", 00:09:50.991 "aliases": [ 00:09:50.991 "f67e150e-586b-402b-bf0a-fe40407c9fe7" 00:09:50.991 ], 00:09:50.991 "product_name": "Malloc disk", 00:09:50.991 "block_size": 4096, 00:09:50.991 "num_blocks": 256, 00:09:50.991 "uuid": "f67e150e-586b-402b-bf0a-fe40407c9fe7", 00:09:50.991 "assigned_rate_limits": { 00:09:50.991 "rw_ios_per_sec": 0, 00:09:50.991 "rw_mbytes_per_sec": 0, 00:09:50.991 "r_mbytes_per_sec": 0, 00:09:50.991 "w_mbytes_per_sec": 0 00:09:50.991 }, 00:09:50.991 "claimed": false, 00:09:50.991 "zoned": false, 00:09:50.991 "supported_io_types": { 00:09:50.991 "read": true, 00:09:50.991 "write": true, 00:09:50.991 "unmap": true, 00:09:50.991 "flush": true, 00:09:50.991 "reset": true, 00:09:50.991 "nvme_admin": false, 00:09:50.991 "nvme_io": false, 00:09:50.991 "nvme_io_md": false, 00:09:50.991 "write_zeroes": true, 00:09:50.991 "zcopy": true, 00:09:50.991 "get_zone_info": false, 00:09:50.991 "zone_management": false, 00:09:50.991 "zone_append": false, 00:09:50.991 "compare": false, 00:09:50.991 "compare_and_write": false, 00:09:50.991 "abort": true, 00:09:50.991 "seek_hole": false, 00:09:50.991 "seek_data": false, 00:09:50.991 "copy": 
true, 00:09:50.991 "nvme_iov_md": false 00:09:50.991 }, 00:09:50.991 "memory_domains": [ 00:09:50.991 { 00:09:50.991 "dma_device_id": "system", 00:09:50.991 "dma_device_type": 1 00:09:50.991 }, 00:09:50.991 { 00:09:50.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.991 "dma_device_type": 2 00:09:50.991 } 00:09:50.991 ], 00:09:50.991 "driver_specific": {} 00:09:50.991 } 00:09:50.991 ]' 00:09:50.991 13:04:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:09:50.991 13:04:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:50.991 13:04:37 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:50.991 13:04:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.991 13:04:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:50.991 13:04:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.991 13:04:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:50.991 13:04:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.991 13:04:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:50.991 13:04:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.991 13:04:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:50.991 13:04:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:09:51.249 13:04:38 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:51.249 00:09:51.249 real 0m0.169s 00:09:51.249 user 0m0.110s 00:09:51.249 sys 0m0.015s 00:09:51.249 13:04:38 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.249 ************************************ 00:09:51.249 13:04:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:51.249 END TEST rpc_plugins 00:09:51.249 ************************************ 00:09:51.249 13:04:38 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:51.249 13:04:38 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:51.249 13:04:38 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.249 13:04:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.249 ************************************ 00:09:51.249 START TEST rpc_trace_cmd_test 00:09:51.249 ************************************ 00:09:51.249 13:04:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:09:51.249 13:04:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:09:51.249 13:04:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:51.249 13:04:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.249 13:04:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.249 13:04:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.249 13:04:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:09:51.249 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56991", 00:09:51.249 "tpoint_group_mask": "0x8", 00:09:51.249 "iscsi_conn": { 00:09:51.249 "mask": "0x2", 00:09:51.249 "tpoint_mask": "0x0" 00:09:51.249 }, 00:09:51.249 "scsi": { 00:09:51.249 "mask": "0x4", 00:09:51.249 "tpoint_mask": "0x0" 00:09:51.249 }, 00:09:51.249 "bdev": { 00:09:51.249 "mask": "0x8", 00:09:51.249 "tpoint_mask": "0xffffffffffffffff" 00:09:51.249 }, 00:09:51.249 "nvmf_rdma": { 00:09:51.249 "mask": "0x10", 00:09:51.249 "tpoint_mask": "0x0" 00:09:51.249 }, 00:09:51.249 "nvmf_tcp": { 00:09:51.249 "mask": "0x20", 00:09:51.249 "tpoint_mask": "0x0" 00:09:51.249 }, 00:09:51.249 "ftl": { 00:09:51.249 "mask": "0x40", 00:09:51.249 "tpoint_mask": "0x0" 00:09:51.249 }, 00:09:51.249 "blobfs": { 00:09:51.249 "mask": "0x80", 00:09:51.249 "tpoint_mask": "0x0" 00:09:51.249 }, 00:09:51.249 "dsa": { 00:09:51.249 "mask": "0x200", 00:09:51.249 "tpoint_mask": "0x0" 00:09:51.249 }, 00:09:51.249 "thread": { 00:09:51.249 "mask": "0x400", 00:09:51.249 
"tpoint_mask": "0x0" 00:09:51.249 }, 00:09:51.249 "nvme_pcie": { 00:09:51.249 "mask": "0x800", 00:09:51.249 "tpoint_mask": "0x0" 00:09:51.249 }, 00:09:51.249 "iaa": { 00:09:51.249 "mask": "0x1000", 00:09:51.250 "tpoint_mask": "0x0" 00:09:51.250 }, 00:09:51.250 "nvme_tcp": { 00:09:51.250 "mask": "0x2000", 00:09:51.250 "tpoint_mask": "0x0" 00:09:51.250 }, 00:09:51.250 "bdev_nvme": { 00:09:51.250 "mask": "0x4000", 00:09:51.250 "tpoint_mask": "0x0" 00:09:51.250 }, 00:09:51.250 "sock": { 00:09:51.250 "mask": "0x8000", 00:09:51.250 "tpoint_mask": "0x0" 00:09:51.250 }, 00:09:51.250 "blob": { 00:09:51.250 "mask": "0x10000", 00:09:51.250 "tpoint_mask": "0x0" 00:09:51.250 }, 00:09:51.250 "bdev_raid": { 00:09:51.250 "mask": "0x20000", 00:09:51.250 "tpoint_mask": "0x0" 00:09:51.250 }, 00:09:51.250 "scheduler": { 00:09:51.250 "mask": "0x40000", 00:09:51.250 "tpoint_mask": "0x0" 00:09:51.250 } 00:09:51.250 }' 00:09:51.250 13:04:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:09:51.250 13:04:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:09:51.250 13:04:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:51.250 13:04:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:51.250 13:04:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:51.250 13:04:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:51.250 13:04:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:51.528 13:04:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:51.528 13:04:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:51.528 13:04:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:51.528 00:09:51.528 real 0m0.278s 00:09:51.528 user 0m0.247s 00:09:51.528 sys 0m0.022s 00:09:51.528 13:04:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:09:51.528 ************************************ 00:09:51.528 END TEST rpc_trace_cmd_test 00:09:51.528 ************************************ 00:09:51.528 13:04:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.528 13:04:38 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:09:51.528 13:04:38 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:51.528 13:04:38 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:51.528 13:04:38 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:51.528 13:04:38 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.528 13:04:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.528 ************************************ 00:09:51.528 START TEST rpc_daemon_integrity 00:09:51.528 ************************************ 00:09:51.528 13:04:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:09:51.528 13:04:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:51.528 13:04:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.528 13:04:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:51.528 13:04:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.528 13:04:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:51.528 13:04:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:51.528 13:04:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:51.528 13:04:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:51.528 13:04:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.528 13:04:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:51.528 13:04:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.528 13:04:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:09:51.528 13:04:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:51.528 13:04:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.528 13:04:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:51.528 13:04:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.528 13:04:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:51.528 { 00:09:51.528 "name": "Malloc2", 00:09:51.528 "aliases": [ 00:09:51.528 "a29bdf55-c19e-4fba-b865-c1cdf001e2c0" 00:09:51.528 ], 00:09:51.528 "product_name": "Malloc disk", 00:09:51.528 "block_size": 512, 00:09:51.528 "num_blocks": 16384, 00:09:51.528 "uuid": "a29bdf55-c19e-4fba-b865-c1cdf001e2c0", 00:09:51.528 "assigned_rate_limits": { 00:09:51.528 "rw_ios_per_sec": 0, 00:09:51.528 "rw_mbytes_per_sec": 0, 00:09:51.528 "r_mbytes_per_sec": 0, 00:09:51.528 "w_mbytes_per_sec": 0 00:09:51.528 }, 00:09:51.528 "claimed": false, 00:09:51.528 "zoned": false, 00:09:51.528 "supported_io_types": { 00:09:51.528 "read": true, 00:09:51.528 "write": true, 00:09:51.528 "unmap": true, 00:09:51.528 "flush": true, 00:09:51.528 "reset": true, 00:09:51.528 "nvme_admin": false, 00:09:51.528 "nvme_io": false, 00:09:51.528 "nvme_io_md": false, 00:09:51.528 "write_zeroes": true, 00:09:51.528 "zcopy": true, 00:09:51.528 "get_zone_info": false, 00:09:51.528 "zone_management": false, 00:09:51.528 "zone_append": false, 00:09:51.528 "compare": false, 00:09:51.528 "compare_and_write": false, 00:09:51.528 "abort": true, 00:09:51.528 "seek_hole": false, 00:09:51.528 "seek_data": false, 00:09:51.528 "copy": true, 00:09:51.528 "nvme_iov_md": false 00:09:51.528 }, 00:09:51.528 "memory_domains": [ 00:09:51.528 { 00:09:51.528 "dma_device_id": "system", 00:09:51.528 "dma_device_type": 1 00:09:51.528 }, 00:09:51.528 { 00:09:51.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.528 "dma_device_type": 2 00:09:51.528 } 
00:09:51.528 ], 00:09:51.528 "driver_specific": {} 00:09:51.528 } 00:09:51.528 ]' 00:09:51.528 13:04:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:51.786 13:04:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:51.786 13:04:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:51.786 13:04:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.786 13:04:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:51.786 [2024-12-06 13:04:38.558250] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:51.786 [2024-12-06 13:04:38.558369] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.786 [2024-12-06 13:04:38.558405] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:51.786 [2024-12-06 13:04:38.558424] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.786 [2024-12-06 13:04:38.561399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.786 [2024-12-06 13:04:38.561449] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:51.786 Passthru0 00:09:51.786 13:04:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.786 13:04:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:51.786 13:04:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.786 13:04:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:51.786 13:04:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.786 13:04:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:51.786 { 00:09:51.786 "name": "Malloc2", 00:09:51.786 "aliases": [ 00:09:51.786 "a29bdf55-c19e-4fba-b865-c1cdf001e2c0" 
00:09:51.786 ], 00:09:51.786 "product_name": "Malloc disk", 00:09:51.786 "block_size": 512, 00:09:51.786 "num_blocks": 16384, 00:09:51.786 "uuid": "a29bdf55-c19e-4fba-b865-c1cdf001e2c0", 00:09:51.786 "assigned_rate_limits": { 00:09:51.786 "rw_ios_per_sec": 0, 00:09:51.786 "rw_mbytes_per_sec": 0, 00:09:51.786 "r_mbytes_per_sec": 0, 00:09:51.786 "w_mbytes_per_sec": 0 00:09:51.786 }, 00:09:51.786 "claimed": true, 00:09:51.786 "claim_type": "exclusive_write", 00:09:51.786 "zoned": false, 00:09:51.786 "supported_io_types": { 00:09:51.786 "read": true, 00:09:51.786 "write": true, 00:09:51.786 "unmap": true, 00:09:51.786 "flush": true, 00:09:51.786 "reset": true, 00:09:51.786 "nvme_admin": false, 00:09:51.786 "nvme_io": false, 00:09:51.786 "nvme_io_md": false, 00:09:51.786 "write_zeroes": true, 00:09:51.786 "zcopy": true, 00:09:51.786 "get_zone_info": false, 00:09:51.786 "zone_management": false, 00:09:51.786 "zone_append": false, 00:09:51.786 "compare": false, 00:09:51.786 "compare_and_write": false, 00:09:51.786 "abort": true, 00:09:51.786 "seek_hole": false, 00:09:51.786 "seek_data": false, 00:09:51.786 "copy": true, 00:09:51.786 "nvme_iov_md": false 00:09:51.786 }, 00:09:51.786 "memory_domains": [ 00:09:51.786 { 00:09:51.786 "dma_device_id": "system", 00:09:51.786 "dma_device_type": 1 00:09:51.787 }, 00:09:51.787 { 00:09:51.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.787 "dma_device_type": 2 00:09:51.787 } 00:09:51.787 ], 00:09:51.787 "driver_specific": {} 00:09:51.787 }, 00:09:51.787 { 00:09:51.787 "name": "Passthru0", 00:09:51.787 "aliases": [ 00:09:51.787 "187aff6c-86b9-53ec-84e2-d97edcb82161" 00:09:51.787 ], 00:09:51.787 "product_name": "passthru", 00:09:51.787 "block_size": 512, 00:09:51.787 "num_blocks": 16384, 00:09:51.787 "uuid": "187aff6c-86b9-53ec-84e2-d97edcb82161", 00:09:51.787 "assigned_rate_limits": { 00:09:51.787 "rw_ios_per_sec": 0, 00:09:51.787 "rw_mbytes_per_sec": 0, 00:09:51.787 "r_mbytes_per_sec": 0, 00:09:51.787 "w_mbytes_per_sec": 0 
00:09:51.787 }, 00:09:51.787 "claimed": false, 00:09:51.787 "zoned": false, 00:09:51.787 "supported_io_types": { 00:09:51.787 "read": true, 00:09:51.787 "write": true, 00:09:51.787 "unmap": true, 00:09:51.787 "flush": true, 00:09:51.787 "reset": true, 00:09:51.787 "nvme_admin": false, 00:09:51.787 "nvme_io": false, 00:09:51.787 "nvme_io_md": false, 00:09:51.787 "write_zeroes": true, 00:09:51.787 "zcopy": true, 00:09:51.787 "get_zone_info": false, 00:09:51.787 "zone_management": false, 00:09:51.787 "zone_append": false, 00:09:51.787 "compare": false, 00:09:51.787 "compare_and_write": false, 00:09:51.787 "abort": true, 00:09:51.787 "seek_hole": false, 00:09:51.787 "seek_data": false, 00:09:51.787 "copy": true, 00:09:51.787 "nvme_iov_md": false 00:09:51.787 }, 00:09:51.787 "memory_domains": [ 00:09:51.787 { 00:09:51.787 "dma_device_id": "system", 00:09:51.787 "dma_device_type": 1 00:09:51.787 }, 00:09:51.787 { 00:09:51.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.787 "dma_device_type": 2 00:09:51.787 } 00:09:51.787 ], 00:09:51.787 "driver_specific": { 00:09:51.787 "passthru": { 00:09:51.787 "name": "Passthru0", 00:09:51.787 "base_bdev_name": "Malloc2" 00:09:51.787 } 00:09:51.787 } 00:09:51.787 } 00:09:51.787 ]' 00:09:51.787 13:04:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:51.787 13:04:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:51.787 13:04:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:51.787 13:04:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.787 13:04:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:51.787 13:04:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.787 13:04:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:51.787 13:04:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:51.787 13:04:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:51.787 13:04:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.787 13:04:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:51.787 13:04:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.787 13:04:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:51.787 13:04:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.787 13:04:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:51.787 13:04:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:51.787 13:04:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:51.787 00:09:51.787 real 0m0.348s 00:09:51.787 user 0m0.208s 00:09:51.787 sys 0m0.042s 00:09:51.787 13:04:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.787 13:04:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:51.787 ************************************ 00:09:51.787 END TEST rpc_daemon_integrity 00:09:51.787 ************************************ 00:09:51.787 13:04:38 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:51.787 13:04:38 rpc -- rpc/rpc.sh@84 -- # killprocess 56991 00:09:51.787 13:04:38 rpc -- common/autotest_common.sh@954 -- # '[' -z 56991 ']' 00:09:51.787 13:04:38 rpc -- common/autotest_common.sh@958 -- # kill -0 56991 00:09:51.787 13:04:38 rpc -- common/autotest_common.sh@959 -- # uname 00:09:51.787 13:04:38 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:51.787 13:04:38 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56991 00:09:52.045 13:04:38 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:52.045 13:04:38 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:52.045 
13:04:38 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56991' 00:09:52.045 killing process with pid 56991 00:09:52.045 13:04:38 rpc -- common/autotest_common.sh@973 -- # kill 56991 00:09:52.045 13:04:38 rpc -- common/autotest_common.sh@978 -- # wait 56991 00:09:54.576 00:09:54.576 real 0m5.249s 00:09:54.576 user 0m5.906s 00:09:54.576 sys 0m0.899s 00:09:54.576 13:04:41 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.576 13:04:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:54.576 ************************************ 00:09:54.576 END TEST rpc 00:09:54.576 ************************************ 00:09:54.576 13:04:41 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:54.576 13:04:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:54.576 13:04:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.576 13:04:41 -- common/autotest_common.sh@10 -- # set +x 00:09:54.576 ************************************ 00:09:54.576 START TEST skip_rpc 00:09:54.576 ************************************ 00:09:54.576 13:04:41 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:54.576 * Looking for test storage... 
00:09:54.576 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:54.576 13:04:41 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:54.576 13:04:41 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:54.576 13:04:41 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:54.576 13:04:41 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:54.576 13:04:41 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:54.576 13:04:41 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:54.576 13:04:41 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:54.576 13:04:41 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:54.576 13:04:41 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:54.576 13:04:41 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:54.576 13:04:41 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:54.576 13:04:41 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:54.576 13:04:41 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:54.576 13:04:41 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:54.576 13:04:41 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:54.576 13:04:41 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:54.576 13:04:41 skip_rpc -- scripts/common.sh@345 -- # : 1 00:09:54.576 13:04:41 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:54.576 13:04:41 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:54.576 13:04:41 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:54.576 13:04:41 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:09:54.576 13:04:41 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:54.576 13:04:41 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:09:54.576 13:04:41 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:54.576 13:04:41 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:54.576 13:04:41 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:09:54.576 13:04:41 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:54.576 13:04:41 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:09:54.576 13:04:41 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:54.576 13:04:41 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:54.576 13:04:41 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:54.576 13:04:41 skip_rpc -- scripts/common.sh@368 -- # return 0 00:09:54.576 13:04:41 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:54.576 13:04:41 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:54.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.576 --rc genhtml_branch_coverage=1 00:09:54.576 --rc genhtml_function_coverage=1 00:09:54.576 --rc genhtml_legend=1 00:09:54.576 --rc geninfo_all_blocks=1 00:09:54.576 --rc geninfo_unexecuted_blocks=1 00:09:54.576 00:09:54.576 ' 00:09:54.576 13:04:41 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:54.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.576 --rc genhtml_branch_coverage=1 00:09:54.576 --rc genhtml_function_coverage=1 00:09:54.576 --rc genhtml_legend=1 00:09:54.576 --rc geninfo_all_blocks=1 00:09:54.576 --rc geninfo_unexecuted_blocks=1 00:09:54.576 00:09:54.576 ' 00:09:54.576 13:04:41 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:09:54.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.576 --rc genhtml_branch_coverage=1 00:09:54.576 --rc genhtml_function_coverage=1 00:09:54.576 --rc genhtml_legend=1 00:09:54.576 --rc geninfo_all_blocks=1 00:09:54.576 --rc geninfo_unexecuted_blocks=1 00:09:54.576 00:09:54.576 ' 00:09:54.576 13:04:41 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:54.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.576 --rc genhtml_branch_coverage=1 00:09:54.576 --rc genhtml_function_coverage=1 00:09:54.576 --rc genhtml_legend=1 00:09:54.576 --rc geninfo_all_blocks=1 00:09:54.576 --rc geninfo_unexecuted_blocks=1 00:09:54.576 00:09:54.576 ' 00:09:54.576 13:04:41 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:54.576 13:04:41 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:54.576 13:04:41 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:09:54.576 13:04:41 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:54.576 13:04:41 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.577 13:04:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:54.577 ************************************ 00:09:54.577 START TEST skip_rpc 00:09:54.577 ************************************ 00:09:54.577 13:04:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:09:54.577 13:04:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57220 00:09:54.577 13:04:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:54.577 13:04:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:09:54.577 13:04:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:09:54.577 [2024-12-06 13:04:41.502134] Starting SPDK v25.01-pre 
git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:09:54.577 [2024-12-06 13:04:41.502421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57220 ] 00:09:54.835 [2024-12-06 13:04:41.691720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.835 [2024-12-06 13:04:41.827001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.106 13:04:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:10:00.106 13:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:00.106 13:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:10:00.106 13:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:00.106 13:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:00.106 13:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:00.106 13:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:00.106 13:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:10:00.106 13:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.106 13:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:00.106 13:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:00.106 13:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:00.106 13:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:00.106 13:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:00.106 13:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:10:00.106 13:04:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:10:00.106 13:04:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57220 00:10:00.106 13:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57220 ']' 00:10:00.106 13:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57220 00:10:00.106 13:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:10:00.106 13:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:00.106 13:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57220 00:10:00.106 13:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:00.106 killing process with pid 57220 00:10:00.106 13:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:00.106 13:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57220' 00:10:00.106 13:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57220 00:10:00.106 13:04:46 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57220 00:10:02.006 00:10:02.006 real 0m7.292s 00:10:02.006 user 0m6.685s 00:10:02.006 sys 0m0.503s 00:10:02.006 13:04:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.006 13:04:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.006 ************************************ 00:10:02.006 END TEST skip_rpc 00:10:02.006 ************************************ 00:10:02.006 13:04:48 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:10:02.006 13:04:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:02.006 13:04:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.006 13:04:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.006 
************************************ 00:10:02.006 START TEST skip_rpc_with_json 00:10:02.006 ************************************ 00:10:02.006 13:04:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:10:02.006 13:04:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:10:02.006 13:04:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57324 00:10:02.006 13:04:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:02.006 13:04:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:02.006 13:04:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57324 00:10:02.006 13:04:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57324 ']' 00:10:02.006 13:04:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.006 13:04:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:02.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.006 13:04:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.006 13:04:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:02.006 13:04:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:02.006 [2024-12-06 13:04:48.839003] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:10:02.006 [2024-12-06 13:04:48.839216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57324 ] 00:10:02.264 [2024-12-06 13:04:49.025346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.264 [2024-12-06 13:04:49.167750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.199 13:04:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:03.199 13:04:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:10:03.199 13:04:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:10:03.199 13:04:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.199 13:04:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:03.199 [2024-12-06 13:04:50.104805] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:10:03.199 request: 00:10:03.199 { 00:10:03.199 "trtype": "tcp", 00:10:03.199 "method": "nvmf_get_transports", 00:10:03.199 "req_id": 1 00:10:03.199 } 00:10:03.199 Got JSON-RPC error response 00:10:03.199 response: 00:10:03.199 { 00:10:03.199 "code": -19, 00:10:03.199 "message": "No such device" 00:10:03.199 } 00:10:03.199 13:04:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:03.199 13:04:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:10:03.199 13:04:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.199 13:04:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:03.199 [2024-12-06 13:04:50.116957] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:10:03.199 13:04:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.199 13:04:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:10:03.199 13:04:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.199 13:04:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:03.458 13:04:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.458 13:04:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:03.458 { 00:10:03.458 "subsystems": [ 00:10:03.458 { 00:10:03.458 "subsystem": "fsdev", 00:10:03.458 "config": [ 00:10:03.458 { 00:10:03.458 "method": "fsdev_set_opts", 00:10:03.458 "params": { 00:10:03.458 "fsdev_io_pool_size": 65535, 00:10:03.458 "fsdev_io_cache_size": 256 00:10:03.458 } 00:10:03.458 } 00:10:03.458 ] 00:10:03.458 }, 00:10:03.458 { 00:10:03.458 "subsystem": "keyring", 00:10:03.458 "config": [] 00:10:03.458 }, 00:10:03.458 { 00:10:03.458 "subsystem": "iobuf", 00:10:03.458 "config": [ 00:10:03.458 { 00:10:03.458 "method": "iobuf_set_options", 00:10:03.458 "params": { 00:10:03.458 "small_pool_count": 8192, 00:10:03.458 "large_pool_count": 1024, 00:10:03.458 "small_bufsize": 8192, 00:10:03.458 "large_bufsize": 135168, 00:10:03.458 "enable_numa": false 00:10:03.458 } 00:10:03.458 } 00:10:03.458 ] 00:10:03.458 }, 00:10:03.458 { 00:10:03.458 "subsystem": "sock", 00:10:03.458 "config": [ 00:10:03.458 { 00:10:03.458 "method": "sock_set_default_impl", 00:10:03.458 "params": { 00:10:03.458 "impl_name": "posix" 00:10:03.458 } 00:10:03.458 }, 00:10:03.458 { 00:10:03.458 "method": "sock_impl_set_options", 00:10:03.458 "params": { 00:10:03.458 "impl_name": "ssl", 00:10:03.458 "recv_buf_size": 4096, 00:10:03.458 "send_buf_size": 4096, 00:10:03.458 "enable_recv_pipe": true, 00:10:03.458 "enable_quickack": false, 00:10:03.458 
"enable_placement_id": 0, 00:10:03.458 "enable_zerocopy_send_server": true, 00:10:03.458 "enable_zerocopy_send_client": false, 00:10:03.458 "zerocopy_threshold": 0, 00:10:03.458 "tls_version": 0, 00:10:03.458 "enable_ktls": false 00:10:03.458 } 00:10:03.458 }, 00:10:03.458 { 00:10:03.458 "method": "sock_impl_set_options", 00:10:03.458 "params": { 00:10:03.458 "impl_name": "posix", 00:10:03.458 "recv_buf_size": 2097152, 00:10:03.458 "send_buf_size": 2097152, 00:10:03.458 "enable_recv_pipe": true, 00:10:03.458 "enable_quickack": false, 00:10:03.458 "enable_placement_id": 0, 00:10:03.458 "enable_zerocopy_send_server": true, 00:10:03.458 "enable_zerocopy_send_client": false, 00:10:03.458 "zerocopy_threshold": 0, 00:10:03.458 "tls_version": 0, 00:10:03.458 "enable_ktls": false 00:10:03.458 } 00:10:03.458 } 00:10:03.458 ] 00:10:03.458 }, 00:10:03.458 { 00:10:03.458 "subsystem": "vmd", 00:10:03.458 "config": [] 00:10:03.458 }, 00:10:03.458 { 00:10:03.458 "subsystem": "accel", 00:10:03.458 "config": [ 00:10:03.458 { 00:10:03.458 "method": "accel_set_options", 00:10:03.458 "params": { 00:10:03.458 "small_cache_size": 128, 00:10:03.458 "large_cache_size": 16, 00:10:03.458 "task_count": 2048, 00:10:03.458 "sequence_count": 2048, 00:10:03.458 "buf_count": 2048 00:10:03.458 } 00:10:03.458 } 00:10:03.458 ] 00:10:03.458 }, 00:10:03.458 { 00:10:03.458 "subsystem": "bdev", 00:10:03.458 "config": [ 00:10:03.458 { 00:10:03.458 "method": "bdev_set_options", 00:10:03.458 "params": { 00:10:03.458 "bdev_io_pool_size": 65535, 00:10:03.458 "bdev_io_cache_size": 256, 00:10:03.458 "bdev_auto_examine": true, 00:10:03.458 "iobuf_small_cache_size": 128, 00:10:03.458 "iobuf_large_cache_size": 16 00:10:03.458 } 00:10:03.458 }, 00:10:03.458 { 00:10:03.458 "method": "bdev_raid_set_options", 00:10:03.458 "params": { 00:10:03.458 "process_window_size_kb": 1024, 00:10:03.458 "process_max_bandwidth_mb_sec": 0 00:10:03.458 } 00:10:03.458 }, 00:10:03.458 { 00:10:03.458 "method": "bdev_iscsi_set_options", 
00:10:03.458 "params": { 00:10:03.458 "timeout_sec": 30 00:10:03.458 } 00:10:03.458 }, 00:10:03.458 { 00:10:03.458 "method": "bdev_nvme_set_options", 00:10:03.458 "params": { 00:10:03.458 "action_on_timeout": "none", 00:10:03.458 "timeout_us": 0, 00:10:03.458 "timeout_admin_us": 0, 00:10:03.458 "keep_alive_timeout_ms": 10000, 00:10:03.458 "arbitration_burst": 0, 00:10:03.458 "low_priority_weight": 0, 00:10:03.458 "medium_priority_weight": 0, 00:10:03.458 "high_priority_weight": 0, 00:10:03.458 "nvme_adminq_poll_period_us": 10000, 00:10:03.458 "nvme_ioq_poll_period_us": 0, 00:10:03.458 "io_queue_requests": 0, 00:10:03.458 "delay_cmd_submit": true, 00:10:03.458 "transport_retry_count": 4, 00:10:03.458 "bdev_retry_count": 3, 00:10:03.458 "transport_ack_timeout": 0, 00:10:03.458 "ctrlr_loss_timeout_sec": 0, 00:10:03.458 "reconnect_delay_sec": 0, 00:10:03.458 "fast_io_fail_timeout_sec": 0, 00:10:03.458 "disable_auto_failback": false, 00:10:03.458 "generate_uuids": false, 00:10:03.458 "transport_tos": 0, 00:10:03.458 "nvme_error_stat": false, 00:10:03.458 "rdma_srq_size": 0, 00:10:03.458 "io_path_stat": false, 00:10:03.458 "allow_accel_sequence": false, 00:10:03.458 "rdma_max_cq_size": 0, 00:10:03.458 "rdma_cm_event_timeout_ms": 0, 00:10:03.458 "dhchap_digests": [ 00:10:03.458 "sha256", 00:10:03.458 "sha384", 00:10:03.458 "sha512" 00:10:03.458 ], 00:10:03.458 "dhchap_dhgroups": [ 00:10:03.458 "null", 00:10:03.458 "ffdhe2048", 00:10:03.458 "ffdhe3072", 00:10:03.458 "ffdhe4096", 00:10:03.458 "ffdhe6144", 00:10:03.458 "ffdhe8192" 00:10:03.458 ] 00:10:03.458 } 00:10:03.458 }, 00:10:03.458 { 00:10:03.458 "method": "bdev_nvme_set_hotplug", 00:10:03.458 "params": { 00:10:03.458 "period_us": 100000, 00:10:03.458 "enable": false 00:10:03.458 } 00:10:03.458 }, 00:10:03.458 { 00:10:03.458 "method": "bdev_wait_for_examine" 00:10:03.458 } 00:10:03.458 ] 00:10:03.458 }, 00:10:03.458 { 00:10:03.458 "subsystem": "scsi", 00:10:03.458 "config": null 00:10:03.458 }, 00:10:03.458 { 
00:10:03.458 "subsystem": "scheduler", 00:10:03.458 "config": [ 00:10:03.458 { 00:10:03.458 "method": "framework_set_scheduler", 00:10:03.458 "params": { 00:10:03.458 "name": "static" 00:10:03.458 } 00:10:03.458 } 00:10:03.458 ] 00:10:03.458 }, 00:10:03.458 { 00:10:03.458 "subsystem": "vhost_scsi", 00:10:03.458 "config": [] 00:10:03.458 }, 00:10:03.458 { 00:10:03.458 "subsystem": "vhost_blk", 00:10:03.458 "config": [] 00:10:03.458 }, 00:10:03.458 { 00:10:03.458 "subsystem": "ublk", 00:10:03.458 "config": [] 00:10:03.458 }, 00:10:03.458 { 00:10:03.458 "subsystem": "nbd", 00:10:03.458 "config": [] 00:10:03.458 }, 00:10:03.458 { 00:10:03.458 "subsystem": "nvmf", 00:10:03.458 "config": [ 00:10:03.458 { 00:10:03.458 "method": "nvmf_set_config", 00:10:03.458 "params": { 00:10:03.458 "discovery_filter": "match_any", 00:10:03.459 "admin_cmd_passthru": { 00:10:03.459 "identify_ctrlr": false 00:10:03.459 }, 00:10:03.459 "dhchap_digests": [ 00:10:03.459 "sha256", 00:10:03.459 "sha384", 00:10:03.459 "sha512" 00:10:03.459 ], 00:10:03.459 "dhchap_dhgroups": [ 00:10:03.459 "null", 00:10:03.459 "ffdhe2048", 00:10:03.459 "ffdhe3072", 00:10:03.459 "ffdhe4096", 00:10:03.459 "ffdhe6144", 00:10:03.459 "ffdhe8192" 00:10:03.459 ] 00:10:03.459 } 00:10:03.459 }, 00:10:03.459 { 00:10:03.459 "method": "nvmf_set_max_subsystems", 00:10:03.459 "params": { 00:10:03.459 "max_subsystems": 1024 00:10:03.459 } 00:10:03.459 }, 00:10:03.459 { 00:10:03.459 "method": "nvmf_set_crdt", 00:10:03.459 "params": { 00:10:03.459 "crdt1": 0, 00:10:03.459 "crdt2": 0, 00:10:03.459 "crdt3": 0 00:10:03.459 } 00:10:03.459 }, 00:10:03.459 { 00:10:03.459 "method": "nvmf_create_transport", 00:10:03.459 "params": { 00:10:03.459 "trtype": "TCP", 00:10:03.459 "max_queue_depth": 128, 00:10:03.459 "max_io_qpairs_per_ctrlr": 127, 00:10:03.459 "in_capsule_data_size": 4096, 00:10:03.459 "max_io_size": 131072, 00:10:03.459 "io_unit_size": 131072, 00:10:03.459 "max_aq_depth": 128, 00:10:03.459 "num_shared_buffers": 511, 
00:10:03.459 "buf_cache_size": 4294967295, 00:10:03.459 "dif_insert_or_strip": false, 00:10:03.459 "zcopy": false, 00:10:03.459 "c2h_success": true, 00:10:03.459 "sock_priority": 0, 00:10:03.459 "abort_timeout_sec": 1, 00:10:03.459 "ack_timeout": 0, 00:10:03.459 "data_wr_pool_size": 0 00:10:03.459 } 00:10:03.459 } 00:10:03.459 ] 00:10:03.459 }, 00:10:03.459 { 00:10:03.459 "subsystem": "iscsi", 00:10:03.459 "config": [ 00:10:03.459 { 00:10:03.459 "method": "iscsi_set_options", 00:10:03.459 "params": { 00:10:03.459 "node_base": "iqn.2016-06.io.spdk", 00:10:03.459 "max_sessions": 128, 00:10:03.459 "max_connections_per_session": 2, 00:10:03.459 "max_queue_depth": 64, 00:10:03.459 "default_time2wait": 2, 00:10:03.459 "default_time2retain": 20, 00:10:03.459 "first_burst_length": 8192, 00:10:03.459 "immediate_data": true, 00:10:03.459 "allow_duplicated_isid": false, 00:10:03.459 "error_recovery_level": 0, 00:10:03.459 "nop_timeout": 60, 00:10:03.459 "nop_in_interval": 30, 00:10:03.459 "disable_chap": false, 00:10:03.459 "require_chap": false, 00:10:03.459 "mutual_chap": false, 00:10:03.459 "chap_group": 0, 00:10:03.459 "max_large_datain_per_connection": 64, 00:10:03.459 "max_r2t_per_connection": 4, 00:10:03.459 "pdu_pool_size": 36864, 00:10:03.459 "immediate_data_pool_size": 16384, 00:10:03.459 "data_out_pool_size": 2048 00:10:03.459 } 00:10:03.459 } 00:10:03.459 ] 00:10:03.459 } 00:10:03.459 ] 00:10:03.459 } 00:10:03.459 13:04:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:03.459 13:04:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57324 00:10:03.459 13:04:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57324 ']' 00:10:03.459 13:04:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57324 00:10:03.459 13:04:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:10:03.459 13:04:50 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:03.459 13:04:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57324 00:10:03.459 13:04:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:03.459 13:04:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:03.459 killing process with pid 57324 00:10:03.459 13:04:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57324' 00:10:03.459 13:04:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57324 00:10:03.459 13:04:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57324 00:10:05.989 13:04:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57380 00:10:05.989 13:04:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:05.989 13:04:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:10:11.253 13:04:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57380 00:10:11.253 13:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57380 ']' 00:10:11.253 13:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57380 00:10:11.253 13:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:10:11.253 13:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:11.253 13:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57380 00:10:11.253 13:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:11.253 13:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:10:11.253 killing process with pid 57380 00:10:11.253 13:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57380' 00:10:11.253 13:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57380 00:10:11.253 13:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57380 00:10:13.155 13:04:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:13.155 13:04:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:13.155 00:10:13.155 real 0m11.282s 00:10:13.155 user 0m10.482s 00:10:13.155 sys 0m1.172s 00:10:13.155 13:04:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.155 13:04:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:13.155 ************************************ 00:10:13.155 END TEST skip_rpc_with_json 00:10:13.155 ************************************ 00:10:13.155 13:05:00 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:10:13.155 13:05:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:13.155 13:05:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.155 13:05:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.155 ************************************ 00:10:13.155 START TEST skip_rpc_with_delay 00:10:13.155 ************************************ 00:10:13.155 13:05:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:10:13.155 13:05:00 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:13.155 13:05:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:10:13.155 13:05:00 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:13.155 13:05:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:13.155 13:05:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:13.155 13:05:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:13.155 13:05:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:13.155 13:05:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:13.155 13:05:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:13.155 13:05:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:13.155 13:05:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:10:13.155 13:05:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:13.155 [2024-12-06 13:05:00.156048] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:10:13.414 13:05:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:10:13.414 13:05:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:13.414 13:05:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:13.414 13:05:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:13.414 00:10:13.414 real 0m0.191s 00:10:13.414 user 0m0.097s 00:10:13.414 sys 0m0.092s 00:10:13.414 13:05:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.414 ************************************ 00:10:13.414 END TEST skip_rpc_with_delay 00:10:13.414 ************************************ 00:10:13.414 13:05:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:10:13.414 13:05:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:10:13.415 13:05:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:10:13.415 13:05:00 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:10:13.415 13:05:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:13.415 13:05:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.415 13:05:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.415 ************************************ 00:10:13.415 START TEST exit_on_failed_rpc_init 00:10:13.415 ************************************ 00:10:13.415 13:05:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:10:13.415 13:05:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57508 00:10:13.415 13:05:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57508 00:10:13.415 13:05:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:13.415 13:05:00 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57508 ']' 00:10:13.415 13:05:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.415 13:05:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:13.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.415 13:05:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.415 13:05:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:13.415 13:05:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:13.674 [2024-12-06 13:05:00.435350] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:10:13.674 [2024-12-06 13:05:00.435579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57508 ] 00:10:13.674 [2024-12-06 13:05:00.623924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.932 [2024-12-06 13:05:00.770200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.867 13:05:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:14.867 13:05:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:10:14.867 13:05:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:14.867 13:05:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:14.867 13:05:01 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:10:14.867 13:05:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:14.867 13:05:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:14.867 13:05:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:14.867 13:05:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:14.867 13:05:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:14.867 13:05:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:14.868 13:05:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:14.868 13:05:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:14.868 13:05:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:10:14.868 13:05:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:14.868 [2024-12-06 13:05:01.845616] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:10:14.868 [2024-12-06 13:05:01.845832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57537 ] 00:10:15.126 [2024-12-06 13:05:02.041131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.385 [2024-12-06 13:05:02.237588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.385 [2024-12-06 13:05:02.237755] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:10:15.385 [2024-12-06 13:05:02.237784] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:15.385 [2024-12-06 13:05:02.237808] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:15.643 13:05:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:10:15.643 13:05:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:15.643 13:05:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:10:15.643 13:05:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:10:15.643 13:05:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:10:15.643 13:05:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:15.643 13:05:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:15.643 13:05:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57508 00:10:15.643 13:05:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57508 ']' 00:10:15.643 13:05:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57508 00:10:15.643 13:05:02 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:10:15.643 13:05:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:15.643 13:05:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57508 00:10:15.643 13:05:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:15.643 killing process with pid 57508 00:10:15.643 13:05:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:15.643 13:05:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57508' 00:10:15.643 13:05:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57508 00:10:15.643 13:05:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57508 00:10:18.173 00:10:18.173 real 0m4.688s 00:10:18.173 user 0m5.053s 00:10:18.173 sys 0m0.843s 00:10:18.173 13:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.173 13:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:18.173 ************************************ 00:10:18.173 END TEST exit_on_failed_rpc_init 00:10:18.173 ************************************ 00:10:18.173 13:05:05 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:18.173 00:10:18.173 real 0m23.892s 00:10:18.173 user 0m22.512s 00:10:18.173 sys 0m2.840s 00:10:18.173 13:05:05 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.173 13:05:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.173 ************************************ 00:10:18.173 END TEST skip_rpc 00:10:18.173 ************************************ 00:10:18.173 13:05:05 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:18.173 13:05:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:18.173 13:05:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.173 13:05:05 -- common/autotest_common.sh@10 -- # set +x 00:10:18.173 ************************************ 00:10:18.173 START TEST rpc_client 00:10:18.173 ************************************ 00:10:18.173 13:05:05 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:18.173 * Looking for test storage... 00:10:18.173 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:10:18.173 13:05:05 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:18.173 13:05:05 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:18.173 13:05:05 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:10:18.431 13:05:05 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:18.431 13:05:05 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:18.431 13:05:05 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:18.431 13:05:05 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:18.431 13:05:05 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.431 13:05:05 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:10:18.431 13:05:05 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:10:18.431 13:05:05 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:10:18.431 13:05:05 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:10:18.431 13:05:05 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:10:18.431 13:05:05 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:10:18.431 13:05:05 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:18.431 13:05:05 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:10:18.431 13:05:05 rpc_client -- scripts/common.sh@345 
-- # : 1 00:10:18.431 13:05:05 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:18.431 13:05:05 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:18.431 13:05:05 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:10:18.431 13:05:05 rpc_client -- scripts/common.sh@353 -- # local d=1 00:10:18.431 13:05:05 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.431 13:05:05 rpc_client -- scripts/common.sh@355 -- # echo 1 00:10:18.431 13:05:05 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:10:18.431 13:05:05 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:10:18.431 13:05:05 rpc_client -- scripts/common.sh@353 -- # local d=2 00:10:18.431 13:05:05 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.431 13:05:05 rpc_client -- scripts/common.sh@355 -- # echo 2 00:10:18.431 13:05:05 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:10:18.431 13:05:05 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:18.431 13:05:05 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:18.431 13:05:05 rpc_client -- scripts/common.sh@368 -- # return 0 00:10:18.431 13:05:05 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.431 13:05:05 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:18.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.431 --rc genhtml_branch_coverage=1 00:10:18.431 --rc genhtml_function_coverage=1 00:10:18.431 --rc genhtml_legend=1 00:10:18.431 --rc geninfo_all_blocks=1 00:10:18.431 --rc geninfo_unexecuted_blocks=1 00:10:18.431 00:10:18.431 ' 00:10:18.431 13:05:05 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:18.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.431 --rc genhtml_branch_coverage=1 00:10:18.431 --rc genhtml_function_coverage=1 00:10:18.431 --rc 
genhtml_legend=1 00:10:18.431 --rc geninfo_all_blocks=1 00:10:18.431 --rc geninfo_unexecuted_blocks=1 00:10:18.431 00:10:18.431 ' 00:10:18.431 13:05:05 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:18.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.431 --rc genhtml_branch_coverage=1 00:10:18.431 --rc genhtml_function_coverage=1 00:10:18.431 --rc genhtml_legend=1 00:10:18.431 --rc geninfo_all_blocks=1 00:10:18.431 --rc geninfo_unexecuted_blocks=1 00:10:18.431 00:10:18.431 ' 00:10:18.431 13:05:05 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:18.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.431 --rc genhtml_branch_coverage=1 00:10:18.431 --rc genhtml_function_coverage=1 00:10:18.431 --rc genhtml_legend=1 00:10:18.431 --rc geninfo_all_blocks=1 00:10:18.431 --rc geninfo_unexecuted_blocks=1 00:10:18.431 00:10:18.431 ' 00:10:18.431 13:05:05 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:10:18.431 OK 00:10:18.431 13:05:05 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:10:18.431 00:10:18.431 real 0m0.270s 00:10:18.431 user 0m0.160s 00:10:18.431 sys 0m0.119s 00:10:18.431 13:05:05 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.431 13:05:05 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:10:18.431 ************************************ 00:10:18.431 END TEST rpc_client 00:10:18.431 ************************************ 00:10:18.431 13:05:05 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:18.431 13:05:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:18.431 13:05:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.431 13:05:05 -- common/autotest_common.sh@10 -- # set +x 00:10:18.431 ************************************ 00:10:18.431 START TEST json_config 
00:10:18.431 ************************************ 00:10:18.431 13:05:05 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:18.690 13:05:05 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:18.690 13:05:05 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:10:18.690 13:05:05 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:18.690 13:05:05 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:18.690 13:05:05 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:18.690 13:05:05 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:18.690 13:05:05 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:18.690 13:05:05 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.690 13:05:05 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:10:18.690 13:05:05 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:10:18.690 13:05:05 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:10:18.690 13:05:05 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:10:18.690 13:05:05 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:10:18.690 13:05:05 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:10:18.690 13:05:05 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:18.690 13:05:05 json_config -- scripts/common.sh@344 -- # case "$op" in 00:10:18.690 13:05:05 json_config -- scripts/common.sh@345 -- # : 1 00:10:18.690 13:05:05 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:18.690 13:05:05 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:18.690 13:05:05 json_config -- scripts/common.sh@365 -- # decimal 1 00:10:18.690 13:05:05 json_config -- scripts/common.sh@353 -- # local d=1 00:10:18.690 13:05:05 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.690 13:05:05 json_config -- scripts/common.sh@355 -- # echo 1 00:10:18.690 13:05:05 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:10:18.690 13:05:05 json_config -- scripts/common.sh@366 -- # decimal 2 00:10:18.690 13:05:05 json_config -- scripts/common.sh@353 -- # local d=2 00:10:18.690 13:05:05 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.690 13:05:05 json_config -- scripts/common.sh@355 -- # echo 2 00:10:18.690 13:05:05 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:10:18.690 13:05:05 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:18.690 13:05:05 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:18.690 13:05:05 json_config -- scripts/common.sh@368 -- # return 0 00:10:18.690 13:05:05 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.690 13:05:05 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:18.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.690 --rc genhtml_branch_coverage=1 00:10:18.690 --rc genhtml_function_coverage=1 00:10:18.690 --rc genhtml_legend=1 00:10:18.690 --rc geninfo_all_blocks=1 00:10:18.690 --rc geninfo_unexecuted_blocks=1 00:10:18.690 00:10:18.690 ' 00:10:18.690 13:05:05 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:18.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.690 --rc genhtml_branch_coverage=1 00:10:18.690 --rc genhtml_function_coverage=1 00:10:18.690 --rc genhtml_legend=1 00:10:18.690 --rc geninfo_all_blocks=1 00:10:18.690 --rc geninfo_unexecuted_blocks=1 00:10:18.690 00:10:18.690 ' 00:10:18.690 13:05:05 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:18.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.690 --rc genhtml_branch_coverage=1 00:10:18.690 --rc genhtml_function_coverage=1 00:10:18.690 --rc genhtml_legend=1 00:10:18.690 --rc geninfo_all_blocks=1 00:10:18.690 --rc geninfo_unexecuted_blocks=1 00:10:18.690 00:10:18.690 ' 00:10:18.690 13:05:05 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:18.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.690 --rc genhtml_branch_coverage=1 00:10:18.690 --rc genhtml_function_coverage=1 00:10:18.690 --rc genhtml_legend=1 00:10:18.690 --rc geninfo_all_blocks=1 00:10:18.690 --rc geninfo_unexecuted_blocks=1 00:10:18.690 00:10:18.690 ' 00:10:18.690 13:05:05 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:18.690 13:05:05 json_config -- nvmf/common.sh@7 -- # uname -s 00:10:18.690 13:05:05 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:18.690 13:05:05 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:18.690 13:05:05 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:18.690 13:05:05 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:18.690 13:05:05 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:18.690 13:05:05 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:18.690 13:05:05 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:18.690 13:05:05 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:18.690 13:05:05 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:18.690 13:05:05 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:18.690 13:05:05 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7f176c8d-8b6b-4f89-9f07-a020c6485b6a 00:10:18.690 13:05:05 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=7f176c8d-8b6b-4f89-9f07-a020c6485b6a 00:10:18.690 13:05:05 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:18.690 13:05:05 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:18.690 13:05:05 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:18.690 13:05:05 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:18.690 13:05:05 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:18.690 13:05:05 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:10:18.690 13:05:05 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.690 13:05:05 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.690 13:05:05 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.690 13:05:05 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.690 13:05:05 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.690 13:05:05 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.690 13:05:05 json_config -- paths/export.sh@5 -- # export PATH 00:10:18.690 13:05:05 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.690 13:05:05 json_config -- nvmf/common.sh@51 -- # : 0 00:10:18.690 13:05:05 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:18.690 13:05:05 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:18.690 13:05:05 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:18.690 13:05:05 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:18.690 13:05:05 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:18.690 13:05:05 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:18.690 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:18.690 13:05:05 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:18.690 13:05:05 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:18.690 13:05:05 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:18.690 13:05:05 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:10:18.690 13:05:05 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:10:18.690 13:05:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:10:18.690 13:05:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:10:18.690 13:05:05 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:10:18.690 WARNING: No tests are enabled so not running JSON configuration tests 00:10:18.690 13:05:05 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:10:18.690 13:05:05 json_config -- json_config/json_config.sh@28 -- # exit 0 00:10:18.690 00:10:18.690 real 0m0.215s 00:10:18.690 user 0m0.130s 00:10:18.690 sys 0m0.079s 00:10:18.690 ************************************ 00:10:18.690 END TEST json_config 00:10:18.690 ************************************ 00:10:18.690 13:05:05 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.690 13:05:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:18.690 13:05:05 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:18.690 13:05:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:18.690 13:05:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.690 13:05:05 -- common/autotest_common.sh@10 -- # set +x 00:10:18.690 ************************************ 00:10:18.690 START TEST json_config_extra_key 00:10:18.690 ************************************ 00:10:18.690 13:05:05 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:18.973 13:05:05 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:18.973 13:05:05 json_config_extra_key -- 
common/autotest_common.sh@1711 -- # lcov --version 00:10:18.973 13:05:05 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:18.973 13:05:05 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:18.973 13:05:05 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:18.973 13:05:05 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:18.973 13:05:05 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:18.973 13:05:05 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.973 13:05:05 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:10:18.973 13:05:05 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:10:18.973 13:05:05 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:10:18.973 13:05:05 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:10:18.973 13:05:05 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:10:18.973 13:05:05 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:10:18.973 13:05:05 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:18.973 13:05:05 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:10:18.973 13:05:05 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:10:18.973 13:05:05 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:18.973 13:05:05 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:18.973 13:05:05 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:10:18.973 13:05:05 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:10:18.973 13:05:05 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.973 13:05:05 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:10:18.973 13:05:05 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:10:18.973 13:05:05 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:10:18.973 13:05:05 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:10:18.973 13:05:05 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.973 13:05:05 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:10:18.973 13:05:05 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:10:18.973 13:05:05 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:18.973 13:05:05 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:18.973 13:05:05 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:10:18.973 13:05:05 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.973 13:05:05 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:18.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.973 --rc genhtml_branch_coverage=1 00:10:18.973 --rc genhtml_function_coverage=1 00:10:18.973 --rc genhtml_legend=1 00:10:18.973 --rc geninfo_all_blocks=1 00:10:18.973 --rc geninfo_unexecuted_blocks=1 00:10:18.973 00:10:18.973 ' 00:10:18.973 13:05:05 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:18.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.973 --rc genhtml_branch_coverage=1 00:10:18.973 --rc genhtml_function_coverage=1 00:10:18.973 --rc 
genhtml_legend=1 00:10:18.973 --rc geninfo_all_blocks=1 00:10:18.973 --rc geninfo_unexecuted_blocks=1 00:10:18.973 00:10:18.973 ' 00:10:18.973 13:05:05 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:18.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.973 --rc genhtml_branch_coverage=1 00:10:18.973 --rc genhtml_function_coverage=1 00:10:18.973 --rc genhtml_legend=1 00:10:18.973 --rc geninfo_all_blocks=1 00:10:18.973 --rc geninfo_unexecuted_blocks=1 00:10:18.973 00:10:18.973 ' 00:10:18.973 13:05:05 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:18.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.973 --rc genhtml_branch_coverage=1 00:10:18.973 --rc genhtml_function_coverage=1 00:10:18.973 --rc genhtml_legend=1 00:10:18.973 --rc geninfo_all_blocks=1 00:10:18.973 --rc geninfo_unexecuted_blocks=1 00:10:18.973 00:10:18.973 ' 00:10:18.973 13:05:05 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:18.973 13:05:05 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:10:18.973 13:05:05 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:18.973 13:05:05 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:18.973 13:05:05 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:18.973 13:05:05 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:18.973 13:05:05 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:18.973 13:05:05 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:18.973 13:05:05 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:18.973 13:05:05 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:18.973 13:05:05 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:18.973 13:05:05 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:18.973 13:05:05 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7f176c8d-8b6b-4f89-9f07-a020c6485b6a 00:10:18.973 13:05:05 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=7f176c8d-8b6b-4f89-9f07-a020c6485b6a 00:10:18.973 13:05:05 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:18.973 13:05:05 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:18.973 13:05:05 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:18.973 13:05:05 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:18.973 13:05:05 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:18.973 13:05:05 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:10:18.973 13:05:05 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.973 13:05:05 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.973 13:05:05 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.973 13:05:05 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.973 13:05:05 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.973 13:05:05 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.973 13:05:05 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:10:18.973 13:05:05 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.973 13:05:05 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:10:18.973 13:05:05 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:18.973 13:05:05 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:18.973 13:05:05 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:18.973 13:05:05 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:18.973 13:05:05 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:10:18.973 13:05:05 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:18.973 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:18.974 13:05:05 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:18.974 13:05:05 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:18.974 13:05:05 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:18.974 13:05:05 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:10:18.974 13:05:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:10:18.974 INFO: launching applications... 00:10:18.974 13:05:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:10:18.974 13:05:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:10:18.974 13:05:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:10:18.974 13:05:05 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:10:18.974 13:05:05 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:10:18.974 13:05:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:10:18.974 13:05:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:10:18.974 13:05:05 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:18.974 13:05:05 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:10:18.974 13:05:05 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:18.974 13:05:05 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:10:18.974 13:05:05 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:10:18.974 13:05:05 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:18.974 13:05:05 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:18.974 13:05:05 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:10:18.974 13:05:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:18.974 13:05:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:18.974 13:05:05 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57747 00:10:18.974 13:05:05 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:18.974 13:05:05 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:18.974 Waiting for target to run... 00:10:18.974 13:05:05 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57747 /var/tmp/spdk_tgt.sock 00:10:18.974 13:05:05 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57747 ']' 00:10:18.974 13:05:05 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:18.974 13:05:05 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.974 13:05:05 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:10:18.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:18.974 13:05:05 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.974 13:05:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:19.254 [2024-12-06 13:05:05.990966] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:10:19.254 [2024-12-06 13:05:05.991415] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57747 ] 00:10:19.837 [2024-12-06 13:05:06.550922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.837 [2024-12-06 13:05:06.703327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.404 00:10:20.404 INFO: shutting down applications... 00:10:20.404 13:05:07 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:20.404 13:05:07 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:10:20.404 13:05:07 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:10:20.404 13:05:07 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:10:20.404 13:05:07 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:10:20.404 13:05:07 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:10:20.404 13:05:07 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:20.404 13:05:07 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57747 ]] 00:10:20.404 13:05:07 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57747 00:10:20.404 13:05:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:20.404 13:05:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:20.404 13:05:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57747 00:10:20.404 13:05:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:20.970 13:05:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:20.970 13:05:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:20.970 13:05:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57747 00:10:20.970 13:05:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:21.536 13:05:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:21.536 13:05:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:21.536 13:05:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57747 00:10:21.536 13:05:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:22.101 13:05:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:22.101 13:05:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:22.101 13:05:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57747 00:10:22.101 13:05:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:22.666 13:05:09 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:10:22.666 13:05:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:22.667 13:05:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57747 00:10:22.667 13:05:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:22.924 13:05:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:22.924 13:05:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:22.924 13:05:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57747 00:10:22.924 13:05:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:23.490 13:05:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:23.490 13:05:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:23.490 13:05:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57747 00:10:23.490 13:05:10 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:23.490 13:05:10 json_config_extra_key -- json_config/common.sh@43 -- # break 00:10:23.490 SPDK target shutdown done 00:10:23.490 Success 00:10:23.490 13:05:10 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:23.490 13:05:10 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:23.490 13:05:10 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:10:23.490 00:10:23.490 real 0m4.701s 00:10:23.490 user 0m4.138s 00:10:23.490 sys 0m0.776s 00:10:23.490 ************************************ 00:10:23.490 END TEST json_config_extra_key 00:10:23.490 ************************************ 00:10:23.490 13:05:10 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.490 13:05:10 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:23.490 13:05:10 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:23.490 13:05:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:23.490 13:05:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.490 13:05:10 -- common/autotest_common.sh@10 -- # set +x 00:10:23.490 ************************************ 00:10:23.490 START TEST alias_rpc 00:10:23.490 ************************************ 00:10:23.490 13:05:10 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:23.750 * Looking for test storage... 00:10:23.750 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:10:23.750 13:05:10 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:23.750 13:05:10 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:10:23.750 13:05:10 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:23.750 13:05:10 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:23.750 13:05:10 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:23.750 13:05:10 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:23.750 13:05:10 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:23.750 13:05:10 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:23.750 13:05:10 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:23.750 13:05:10 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:23.750 13:05:10 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:23.750 13:05:10 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:23.750 13:05:10 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:23.750 13:05:10 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:23.750 13:05:10 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:23.750 13:05:10 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:23.750 13:05:10 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:10:23.750 13:05:10 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:23.750 13:05:10 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:23.750 13:05:10 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:23.750 13:05:10 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:10:23.750 13:05:10 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:23.750 13:05:10 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:10:23.750 13:05:10 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:23.750 13:05:10 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:23.750 13:05:10 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:10:23.750 13:05:10 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:23.750 13:05:10 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:10:23.750 13:05:10 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:23.750 13:05:10 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:23.750 13:05:10 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:23.750 13:05:10 alias_rpc -- scripts/common.sh@368 -- # return 0 00:10:23.750 13:05:10 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:23.750 13:05:10 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:23.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.750 --rc genhtml_branch_coverage=1 00:10:23.750 --rc genhtml_function_coverage=1 00:10:23.750 --rc genhtml_legend=1 00:10:23.750 --rc geninfo_all_blocks=1 00:10:23.750 --rc geninfo_unexecuted_blocks=1 00:10:23.750 00:10:23.750 ' 00:10:23.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:23.750 13:05:10 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:23.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.750 --rc genhtml_branch_coverage=1 00:10:23.750 --rc genhtml_function_coverage=1 00:10:23.750 --rc genhtml_legend=1 00:10:23.750 --rc geninfo_all_blocks=1 00:10:23.750 --rc geninfo_unexecuted_blocks=1 00:10:23.750 00:10:23.750 ' 00:10:23.750 13:05:10 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:23.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.750 --rc genhtml_branch_coverage=1 00:10:23.750 --rc genhtml_function_coverage=1 00:10:23.750 --rc genhtml_legend=1 00:10:23.750 --rc geninfo_all_blocks=1 00:10:23.750 --rc geninfo_unexecuted_blocks=1 00:10:23.750 00:10:23.750 ' 00:10:23.750 13:05:10 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:23.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.750 --rc genhtml_branch_coverage=1 00:10:23.750 --rc genhtml_function_coverage=1 00:10:23.750 --rc genhtml_legend=1 00:10:23.750 --rc geninfo_all_blocks=1 00:10:23.750 --rc geninfo_unexecuted_blocks=1 00:10:23.750 00:10:23.750 ' 00:10:23.750 13:05:10 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:23.750 13:05:10 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57853 00:10:23.750 13:05:10 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:23.750 13:05:10 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57853 00:10:23.750 13:05:10 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57853 ']' 00:10:23.750 13:05:10 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.750 13:05:10 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:23.750 13:05:10 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:10:23.750 13:05:10 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:23.750 13:05:10 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.009 [2024-12-06 13:05:10.815858] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:10:24.009 [2024-12-06 13:05:10.817076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57853 ] 00:10:24.009 [2024-12-06 13:05:11.008972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.267 [2024-12-06 13:05:11.154282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.255 13:05:12 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:25.255 13:05:12 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:25.255 13:05:12 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:10:25.521 13:05:12 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57853 00:10:25.521 13:05:12 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57853 ']' 00:10:25.521 13:05:12 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57853 00:10:25.521 13:05:12 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:10:25.521 13:05:12 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:25.521 13:05:12 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57853 00:10:25.521 killing process with pid 57853 00:10:25.521 13:05:12 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:25.521 13:05:12 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:25.521 13:05:12 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57853' 00:10:25.521 
13:05:12 alias_rpc -- common/autotest_common.sh@973 -- # kill 57853 00:10:25.521 13:05:12 alias_rpc -- common/autotest_common.sh@978 -- # wait 57853 00:10:28.056 00:10:28.056 real 0m4.436s 00:10:28.056 user 0m4.460s 00:10:28.056 sys 0m0.783s 00:10:28.056 13:05:14 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.056 13:05:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.056 ************************************ 00:10:28.056 END TEST alias_rpc 00:10:28.056 ************************************ 00:10:28.056 13:05:14 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:10:28.056 13:05:14 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:28.056 13:05:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:28.056 13:05:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.056 13:05:14 -- common/autotest_common.sh@10 -- # set +x 00:10:28.056 ************************************ 00:10:28.056 START TEST spdkcli_tcp 00:10:28.056 ************************************ 00:10:28.056 13:05:14 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:28.056 * Looking for test storage... 
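The alias_rpc teardown traced above follows autotest_common.sh's killprocess pattern: probe the pid with `kill -0`, send the default SIGTERM, then `wait` to reap it. A minimal sketch of that pattern (a simplified stand-in, not the real helper):

```shell
#!/usr/bin/env bash
# Simplified sketch of the killprocess pattern: liveness probe, terminate, reap.
killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1       # fail fast if the process is already gone
    kill "$pid"                      # default SIGTERM, as in the traced teardown
    wait "$pid" 2>/dev/null || true  # reap; ignore the signal-death exit status
}

sleep 60 &                           # stand-in for the spdk_tgt daemon
bg=$!
killprocess "$bg"
status=$(kill -0 "$bg" 2>/dev/null && echo alive || echo gone)
echo "pid $bg is $status"
```

In the traced run the real helper also checks `uname` and the process name via `ps --no-headers -o comm=` before deciding how to kill (the `reactor_0` / `sudo` comparison in the log), which this sketch omits.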
00:10:28.056 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:10:28.056 13:05:15 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:28.056 13:05:15 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:10:28.056 13:05:15 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:28.314 13:05:15 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:28.314 13:05:15 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:28.314 13:05:15 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:28.314 13:05:15 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:28.314 13:05:15 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:10:28.314 13:05:15 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:10:28.314 13:05:15 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:10:28.314 13:05:15 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:10:28.314 13:05:15 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:10:28.314 13:05:15 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:10:28.314 13:05:15 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:10:28.314 13:05:15 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:28.314 13:05:15 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:10:28.314 13:05:15 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:10:28.314 13:05:15 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:28.314 13:05:15 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:28.314 13:05:15 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:10:28.314 13:05:15 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:10:28.314 13:05:15 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:28.314 13:05:15 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:10:28.314 13:05:15 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:28.314 13:05:15 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:10:28.314 13:05:15 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:10:28.314 13:05:15 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:28.314 13:05:15 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:10:28.314 13:05:15 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:28.314 13:05:15 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:28.314 13:05:15 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:28.314 13:05:15 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:10:28.314 13:05:15 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:28.314 13:05:15 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:28.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.315 --rc genhtml_branch_coverage=1 00:10:28.315 --rc genhtml_function_coverage=1 00:10:28.315 --rc genhtml_legend=1 00:10:28.315 --rc geninfo_all_blocks=1 00:10:28.315 --rc geninfo_unexecuted_blocks=1 00:10:28.315 00:10:28.315 ' 00:10:28.315 13:05:15 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:28.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.315 --rc genhtml_branch_coverage=1 00:10:28.315 --rc genhtml_function_coverage=1 00:10:28.315 --rc genhtml_legend=1 00:10:28.315 --rc geninfo_all_blocks=1 00:10:28.315 --rc geninfo_unexecuted_blocks=1 00:10:28.315 00:10:28.315 ' 00:10:28.315 13:05:15 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:28.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.315 --rc genhtml_branch_coverage=1 00:10:28.315 --rc genhtml_function_coverage=1 00:10:28.315 --rc genhtml_legend=1 00:10:28.315 --rc geninfo_all_blocks=1 00:10:28.315 --rc geninfo_unexecuted_blocks=1 00:10:28.315 00:10:28.315 ' 00:10:28.315 13:05:15 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:28.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.315 --rc genhtml_branch_coverage=1 00:10:28.315 --rc genhtml_function_coverage=1 00:10:28.315 --rc genhtml_legend=1 00:10:28.315 --rc geninfo_all_blocks=1 00:10:28.315 --rc geninfo_unexecuted_blocks=1 00:10:28.315 00:10:28.315 ' 00:10:28.315 13:05:15 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:10:28.315 13:05:15 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:10:28.315 13:05:15 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:10:28.315 13:05:15 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:10:28.315 13:05:15 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:10:28.315 13:05:15 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:28.315 13:05:15 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:10:28.315 13:05:15 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:28.315 13:05:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:28.315 13:05:15 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57971 00:10:28.315 13:05:15 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57971 00:10:28.315 13:05:15 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57971 ']' 00:10:28.315 13:05:15 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:10:28.315 13:05:15 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.315 13:05:15 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:28.315 13:05:15 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.315 13:05:15 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:28.315 13:05:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:28.315 [2024-12-06 13:05:15.317017] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:10:28.315 [2024-12-06 13:05:15.317717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57971 ] 00:10:28.573 [2024-12-06 13:05:15.503131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:28.831 [2024-12-06 13:05:15.657855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.831 [2024-12-06 13:05:15.657860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.764 13:05:16 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:29.764 13:05:16 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:10:29.764 13:05:16 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57988 00:10:29.764 13:05:16 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:10:29.764 13:05:16 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:10:30.021 [ 00:10:30.021 "bdev_malloc_delete", 
00:10:30.021 "bdev_malloc_create", 00:10:30.021 "bdev_null_resize", 00:10:30.021 "bdev_null_delete", 00:10:30.021 "bdev_null_create", 00:10:30.021 "bdev_nvme_cuse_unregister", 00:10:30.021 "bdev_nvme_cuse_register", 00:10:30.021 "bdev_opal_new_user", 00:10:30.021 "bdev_opal_set_lock_state", 00:10:30.021 "bdev_opal_delete", 00:10:30.021 "bdev_opal_get_info", 00:10:30.021 "bdev_opal_create", 00:10:30.021 "bdev_nvme_opal_revert", 00:10:30.021 "bdev_nvme_opal_init", 00:10:30.021 "bdev_nvme_send_cmd", 00:10:30.021 "bdev_nvme_set_keys", 00:10:30.021 "bdev_nvme_get_path_iostat", 00:10:30.021 "bdev_nvme_get_mdns_discovery_info", 00:10:30.021 "bdev_nvme_stop_mdns_discovery", 00:10:30.021 "bdev_nvme_start_mdns_discovery", 00:10:30.021 "bdev_nvme_set_multipath_policy", 00:10:30.021 "bdev_nvme_set_preferred_path", 00:10:30.021 "bdev_nvme_get_io_paths", 00:10:30.021 "bdev_nvme_remove_error_injection", 00:10:30.021 "bdev_nvme_add_error_injection", 00:10:30.021 "bdev_nvme_get_discovery_info", 00:10:30.021 "bdev_nvme_stop_discovery", 00:10:30.021 "bdev_nvme_start_discovery", 00:10:30.021 "bdev_nvme_get_controller_health_info", 00:10:30.021 "bdev_nvme_disable_controller", 00:10:30.021 "bdev_nvme_enable_controller", 00:10:30.021 "bdev_nvme_reset_controller", 00:10:30.021 "bdev_nvme_get_transport_statistics", 00:10:30.021 "bdev_nvme_apply_firmware", 00:10:30.021 "bdev_nvme_detach_controller", 00:10:30.021 "bdev_nvme_get_controllers", 00:10:30.021 "bdev_nvme_attach_controller", 00:10:30.021 "bdev_nvme_set_hotplug", 00:10:30.021 "bdev_nvme_set_options", 00:10:30.021 "bdev_passthru_delete", 00:10:30.021 "bdev_passthru_create", 00:10:30.021 "bdev_lvol_set_parent_bdev", 00:10:30.021 "bdev_lvol_set_parent", 00:10:30.021 "bdev_lvol_check_shallow_copy", 00:10:30.021 "bdev_lvol_start_shallow_copy", 00:10:30.021 "bdev_lvol_grow_lvstore", 00:10:30.021 "bdev_lvol_get_lvols", 00:10:30.021 "bdev_lvol_get_lvstores", 00:10:30.021 "bdev_lvol_delete", 00:10:30.021 "bdev_lvol_set_read_only", 
00:10:30.021 "bdev_lvol_resize", 00:10:30.021 "bdev_lvol_decouple_parent", 00:10:30.021 "bdev_lvol_inflate", 00:10:30.021 "bdev_lvol_rename", 00:10:30.021 "bdev_lvol_clone_bdev", 00:10:30.021 "bdev_lvol_clone", 00:10:30.021 "bdev_lvol_snapshot", 00:10:30.021 "bdev_lvol_create", 00:10:30.021 "bdev_lvol_delete_lvstore", 00:10:30.021 "bdev_lvol_rename_lvstore", 00:10:30.021 "bdev_lvol_create_lvstore", 00:10:30.021 "bdev_raid_set_options", 00:10:30.021 "bdev_raid_remove_base_bdev", 00:10:30.021 "bdev_raid_add_base_bdev", 00:10:30.021 "bdev_raid_delete", 00:10:30.021 "bdev_raid_create", 00:10:30.021 "bdev_raid_get_bdevs", 00:10:30.021 "bdev_error_inject_error", 00:10:30.021 "bdev_error_delete", 00:10:30.021 "bdev_error_create", 00:10:30.021 "bdev_split_delete", 00:10:30.021 "bdev_split_create", 00:10:30.021 "bdev_delay_delete", 00:10:30.021 "bdev_delay_create", 00:10:30.021 "bdev_delay_update_latency", 00:10:30.021 "bdev_zone_block_delete", 00:10:30.021 "bdev_zone_block_create", 00:10:30.021 "blobfs_create", 00:10:30.021 "blobfs_detect", 00:10:30.021 "blobfs_set_cache_size", 00:10:30.021 "bdev_aio_delete", 00:10:30.021 "bdev_aio_rescan", 00:10:30.021 "bdev_aio_create", 00:10:30.021 "bdev_ftl_set_property", 00:10:30.021 "bdev_ftl_get_properties", 00:10:30.021 "bdev_ftl_get_stats", 00:10:30.021 "bdev_ftl_unmap", 00:10:30.021 "bdev_ftl_unload", 00:10:30.021 "bdev_ftl_delete", 00:10:30.021 "bdev_ftl_load", 00:10:30.021 "bdev_ftl_create", 00:10:30.021 "bdev_virtio_attach_controller", 00:10:30.021 "bdev_virtio_scsi_get_devices", 00:10:30.021 "bdev_virtio_detach_controller", 00:10:30.021 "bdev_virtio_blk_set_hotplug", 00:10:30.021 "bdev_iscsi_delete", 00:10:30.021 "bdev_iscsi_create", 00:10:30.021 "bdev_iscsi_set_options", 00:10:30.021 "accel_error_inject_error", 00:10:30.021 "ioat_scan_accel_module", 00:10:30.021 "dsa_scan_accel_module", 00:10:30.021 "iaa_scan_accel_module", 00:10:30.022 "keyring_file_remove_key", 00:10:30.022 "keyring_file_add_key", 00:10:30.022 
"keyring_linux_set_options", 00:10:30.022 "fsdev_aio_delete", 00:10:30.022 "fsdev_aio_create", 00:10:30.022 "iscsi_get_histogram", 00:10:30.022 "iscsi_enable_histogram", 00:10:30.022 "iscsi_set_options", 00:10:30.022 "iscsi_get_auth_groups", 00:10:30.022 "iscsi_auth_group_remove_secret", 00:10:30.022 "iscsi_auth_group_add_secret", 00:10:30.022 "iscsi_delete_auth_group", 00:10:30.022 "iscsi_create_auth_group", 00:10:30.022 "iscsi_set_discovery_auth", 00:10:30.022 "iscsi_get_options", 00:10:30.022 "iscsi_target_node_request_logout", 00:10:30.022 "iscsi_target_node_set_redirect", 00:10:30.022 "iscsi_target_node_set_auth", 00:10:30.022 "iscsi_target_node_add_lun", 00:10:30.022 "iscsi_get_stats", 00:10:30.022 "iscsi_get_connections", 00:10:30.022 "iscsi_portal_group_set_auth", 00:10:30.022 "iscsi_start_portal_group", 00:10:30.022 "iscsi_delete_portal_group", 00:10:30.022 "iscsi_create_portal_group", 00:10:30.022 "iscsi_get_portal_groups", 00:10:30.022 "iscsi_delete_target_node", 00:10:30.022 "iscsi_target_node_remove_pg_ig_maps", 00:10:30.022 "iscsi_target_node_add_pg_ig_maps", 00:10:30.022 "iscsi_create_target_node", 00:10:30.022 "iscsi_get_target_nodes", 00:10:30.022 "iscsi_delete_initiator_group", 00:10:30.022 "iscsi_initiator_group_remove_initiators", 00:10:30.022 "iscsi_initiator_group_add_initiators", 00:10:30.022 "iscsi_create_initiator_group", 00:10:30.022 "iscsi_get_initiator_groups", 00:10:30.022 "nvmf_set_crdt", 00:10:30.022 "nvmf_set_config", 00:10:30.022 "nvmf_set_max_subsystems", 00:10:30.022 "nvmf_stop_mdns_prr", 00:10:30.022 "nvmf_publish_mdns_prr", 00:10:30.022 "nvmf_subsystem_get_listeners", 00:10:30.022 "nvmf_subsystem_get_qpairs", 00:10:30.022 "nvmf_subsystem_get_controllers", 00:10:30.022 "nvmf_get_stats", 00:10:30.022 "nvmf_get_transports", 00:10:30.022 "nvmf_create_transport", 00:10:30.022 "nvmf_get_targets", 00:10:30.022 "nvmf_delete_target", 00:10:30.022 "nvmf_create_target", 00:10:30.022 "nvmf_subsystem_allow_any_host", 00:10:30.022 
"nvmf_subsystem_set_keys", 00:10:30.022 "nvmf_subsystem_remove_host", 00:10:30.022 "nvmf_subsystem_add_host", 00:10:30.022 "nvmf_ns_remove_host", 00:10:30.022 "nvmf_ns_add_host", 00:10:30.022 "nvmf_subsystem_remove_ns", 00:10:30.022 "nvmf_subsystem_set_ns_ana_group", 00:10:30.022 "nvmf_subsystem_add_ns", 00:10:30.022 "nvmf_subsystem_listener_set_ana_state", 00:10:30.022 "nvmf_discovery_get_referrals", 00:10:30.022 "nvmf_discovery_remove_referral", 00:10:30.022 "nvmf_discovery_add_referral", 00:10:30.022 "nvmf_subsystem_remove_listener", 00:10:30.022 "nvmf_subsystem_add_listener", 00:10:30.022 "nvmf_delete_subsystem", 00:10:30.022 "nvmf_create_subsystem", 00:10:30.022 "nvmf_get_subsystems", 00:10:30.022 "env_dpdk_get_mem_stats", 00:10:30.022 "nbd_get_disks", 00:10:30.022 "nbd_stop_disk", 00:10:30.022 "nbd_start_disk", 00:10:30.022 "ublk_recover_disk", 00:10:30.022 "ublk_get_disks", 00:10:30.022 "ublk_stop_disk", 00:10:30.022 "ublk_start_disk", 00:10:30.022 "ublk_destroy_target", 00:10:30.022 "ublk_create_target", 00:10:30.022 "virtio_blk_create_transport", 00:10:30.022 "virtio_blk_get_transports", 00:10:30.022 "vhost_controller_set_coalescing", 00:10:30.022 "vhost_get_controllers", 00:10:30.022 "vhost_delete_controller", 00:10:30.022 "vhost_create_blk_controller", 00:10:30.022 "vhost_scsi_controller_remove_target", 00:10:30.022 "vhost_scsi_controller_add_target", 00:10:30.022 "vhost_start_scsi_controller", 00:10:30.022 "vhost_create_scsi_controller", 00:10:30.022 "thread_set_cpumask", 00:10:30.022 "scheduler_set_options", 00:10:30.022 "framework_get_governor", 00:10:30.022 "framework_get_scheduler", 00:10:30.022 "framework_set_scheduler", 00:10:30.022 "framework_get_reactors", 00:10:30.022 "thread_get_io_channels", 00:10:30.022 "thread_get_pollers", 00:10:30.022 "thread_get_stats", 00:10:30.022 "framework_monitor_context_switch", 00:10:30.022 "spdk_kill_instance", 00:10:30.022 "log_enable_timestamps", 00:10:30.022 "log_get_flags", 00:10:30.022 "log_clear_flag", 
00:10:30.022 "log_set_flag", 00:10:30.022 "log_get_level", 00:10:30.022 "log_set_level", 00:10:30.022 "log_get_print_level", 00:10:30.022 "log_set_print_level", 00:10:30.022 "framework_enable_cpumask_locks", 00:10:30.022 "framework_disable_cpumask_locks", 00:10:30.022 "framework_wait_init", 00:10:30.022 "framework_start_init", 00:10:30.022 "scsi_get_devices", 00:10:30.022 "bdev_get_histogram", 00:10:30.022 "bdev_enable_histogram", 00:10:30.022 "bdev_set_qos_limit", 00:10:30.022 "bdev_set_qd_sampling_period", 00:10:30.022 "bdev_get_bdevs", 00:10:30.022 "bdev_reset_iostat", 00:10:30.022 "bdev_get_iostat", 00:10:30.022 "bdev_examine", 00:10:30.022 "bdev_wait_for_examine", 00:10:30.022 "bdev_set_options", 00:10:30.022 "accel_get_stats", 00:10:30.022 "accel_set_options", 00:10:30.022 "accel_set_driver", 00:10:30.022 "accel_crypto_key_destroy", 00:10:30.022 "accel_crypto_keys_get", 00:10:30.022 "accel_crypto_key_create", 00:10:30.022 "accel_assign_opc", 00:10:30.022 "accel_get_module_info", 00:10:30.022 "accel_get_opc_assignments", 00:10:30.022 "vmd_rescan", 00:10:30.022 "vmd_remove_device", 00:10:30.022 "vmd_enable", 00:10:30.022 "sock_get_default_impl", 00:10:30.022 "sock_set_default_impl", 00:10:30.022 "sock_impl_set_options", 00:10:30.022 "sock_impl_get_options", 00:10:30.022 "iobuf_get_stats", 00:10:30.022 "iobuf_set_options", 00:10:30.022 "keyring_get_keys", 00:10:30.022 "framework_get_pci_devices", 00:10:30.022 "framework_get_config", 00:10:30.022 "framework_get_subsystems", 00:10:30.022 "fsdev_set_opts", 00:10:30.022 "fsdev_get_opts", 00:10:30.022 "trace_get_info", 00:10:30.022 "trace_get_tpoint_group_mask", 00:10:30.022 "trace_disable_tpoint_group", 00:10:30.022 "trace_enable_tpoint_group", 00:10:30.022 "trace_clear_tpoint_mask", 00:10:30.022 "trace_set_tpoint_mask", 00:10:30.022 "notify_get_notifications", 00:10:30.022 "notify_get_types", 00:10:30.022 "spdk_get_version", 00:10:30.022 "rpc_get_methods" 00:10:30.022 ] 00:10:30.022 13:05:16 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:10:30.022 13:05:16 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:30.022 13:05:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:30.022 13:05:16 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:30.022 13:05:16 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57971 00:10:30.022 13:05:16 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57971 ']' 00:10:30.022 13:05:16 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57971 00:10:30.022 13:05:16 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:10:30.022 13:05:16 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:30.022 13:05:16 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57971 00:10:30.022 13:05:17 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:30.022 13:05:17 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:30.022 killing process with pid 57971 00:10:30.022 13:05:17 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57971' 00:10:30.022 13:05:17 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57971 00:10:30.022 13:05:17 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57971 00:10:32.604 ************************************ 00:10:32.604 END TEST spdkcli_tcp 00:10:32.604 ************************************ 00:10:32.604 00:10:32.604 real 0m4.475s 00:10:32.604 user 0m7.934s 00:10:32.604 sys 0m0.816s 00:10:32.604 13:05:19 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.604 13:05:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:32.604 13:05:19 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:32.604 13:05:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:32.604 13:05:19 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.604 13:05:19 -- common/autotest_common.sh@10 -- # set +x 00:10:32.604 ************************************ 00:10:32.604 START TEST dpdk_mem_utility 00:10:32.604 ************************************ 00:10:32.604 13:05:19 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:32.604 * Looking for test storage... 00:10:32.604 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:10:32.604 13:05:19 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:32.604 13:05:19 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:10:32.604 13:05:19 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:32.862 13:05:19 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:32.862 13:05:19 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:32.862 13:05:19 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:32.862 13:05:19 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:32.862 13:05:19 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:10:32.862 13:05:19 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:10:32.862 13:05:19 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:10:32.862 13:05:19 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:10:32.862 13:05:19 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:10:32.862 13:05:19 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:10:32.862 13:05:19 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:10:32.862 13:05:19 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:32.862 13:05:19 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:10:32.862 13:05:19 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:10:32.862 
13:05:19 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:32.862 13:05:19 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:32.862 13:05:19 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:10:32.862 13:05:19 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:10:32.862 13:05:19 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:32.862 13:05:19 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:10:32.862 13:05:19 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:10:32.862 13:05:19 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:10:32.862 13:05:19 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:10:32.862 13:05:19 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:32.862 13:05:19 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:10:32.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:32.862 13:05:19 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:10:32.862 13:05:19 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:32.862 13:05:19 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:32.862 13:05:19 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:10:32.862 13:05:19 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:32.862 13:05:19 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:32.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.862 --rc genhtml_branch_coverage=1 00:10:32.862 --rc genhtml_function_coverage=1 00:10:32.862 --rc genhtml_legend=1 00:10:32.862 --rc geninfo_all_blocks=1 00:10:32.862 --rc geninfo_unexecuted_blocks=1 00:10:32.862 00:10:32.862 ' 00:10:32.862 13:05:19 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:32.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.862 --rc genhtml_branch_coverage=1 00:10:32.862 --rc genhtml_function_coverage=1 00:10:32.862 --rc genhtml_legend=1 00:10:32.862 --rc geninfo_all_blocks=1 00:10:32.862 --rc geninfo_unexecuted_blocks=1 00:10:32.862 00:10:32.862 ' 00:10:32.862 13:05:19 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:32.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.862 --rc genhtml_branch_coverage=1 00:10:32.862 --rc genhtml_function_coverage=1 00:10:32.862 --rc genhtml_legend=1 00:10:32.862 --rc geninfo_all_blocks=1 00:10:32.862 --rc geninfo_unexecuted_blocks=1 00:10:32.862 00:10:32.862 ' 00:10:32.862 13:05:19 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:32.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.862 --rc genhtml_branch_coverage=1 00:10:32.862 --rc genhtml_function_coverage=1 00:10:32.862 --rc genhtml_legend=1 
00:10:32.862 --rc geninfo_all_blocks=1 00:10:32.862 --rc geninfo_unexecuted_blocks=1 00:10:32.862 00:10:32.862 ' 00:10:32.862 13:05:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:32.862 13:05:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58093 00:10:32.862 13:05:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58093 00:10:32.863 13:05:19 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58093 ']' 00:10:32.863 13:05:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:32.863 13:05:19 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.863 13:05:19 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:32.863 13:05:19 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.863 13:05:19 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:32.863 13:05:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:32.863 [2024-12-06 13:05:19.834094] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
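Each coverage gate traced above runs scripts/common.sh's `lt 1.15 2`, which calls `cmp_versions` to split both versions on `.`, `-` and `:` (the `IFS=.-:` / `read -ra ver1` lines) and compare them component-wise. A simplified re-implementation of just the less-than case, for illustration only (the real `cmp_versions` dispatches on an `op` argument and handles other operators too):

```shell
#!/usr/bin/env bash
# Simplified component-wise version compare, mirroring the IFS=.-: splitting
# traced from scripts/common.sh. Assumes plain numeric components.
lt() {
    local IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local v x y
    for ((v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++)); do
        x=${a[v]:-0} y=${b[v]:-0}    # missing components count as 0
        ((x < y)) && return 0
        ((x > y)) && return 1
    done
    return 1                         # equal versions are not less-than
}

r1=$(lt 1.15 2 && echo yes || echo no)
r2=$(lt 2 1.15 && echo yes || echo no)
echo "1.15 < 2: $r1; 2 < 1.15: $r2"
```

This is why an installed lcov below version 2 makes the harness fall back to the plain `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` option set seen in the exports above.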
00:10:32.863 [2024-12-06 13:05:19.834578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58093 ] 00:10:33.120 [2024-12-06 13:05:20.025279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.378 [2024-12-06 13:05:20.198426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.315 13:05:21 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:34.315 13:05:21 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:10:34.315 13:05:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:10:34.315 13:05:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:10:34.315 13:05:21 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.315 13:05:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:34.315 { 00:10:34.315 "filename": "/tmp/spdk_mem_dump.txt" 00:10:34.315 } 00:10:34.315 13:05:21 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.315 13:05:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:34.315 DPDK memory size 824.000000 MiB in 1 heap(s) 00:10:34.315 1 heaps totaling size 824.000000 MiB 00:10:34.315 size: 824.000000 MiB heap id: 0 00:10:34.315 end heaps---------- 00:10:34.315 9 mempools totaling size 603.782043 MiB 00:10:34.315 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:10:34.315 size: 158.602051 MiB name: PDU_data_out_Pool 00:10:34.315 size: 100.555481 MiB name: bdev_io_58093 00:10:34.315 size: 50.003479 MiB name: msgpool_58093 00:10:34.316 size: 36.509338 MiB name: fsdev_io_58093 00:10:34.316 size: 
21.763794 MiB name: PDU_Pool 00:10:34.316 size: 19.513306 MiB name: SCSI_TASK_Pool 00:10:34.316 size: 4.133484 MiB name: evtpool_58093 00:10:34.316 size: 0.026123 MiB name: Session_Pool 00:10:34.316 end mempools------- 00:10:34.316 6 memzones totaling size 4.142822 MiB 00:10:34.316 size: 1.000366 MiB name: RG_ring_0_58093 00:10:34.316 size: 1.000366 MiB name: RG_ring_1_58093 00:10:34.316 size: 1.000366 MiB name: RG_ring_4_58093 00:10:34.316 size: 1.000366 MiB name: RG_ring_5_58093 00:10:34.316 size: 0.125366 MiB name: RG_ring_2_58093 00:10:34.316 size: 0.015991 MiB name: RG_ring_3_58093 00:10:34.316 end memzones------- 00:10:34.316 13:05:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:10:34.316 heap id: 0 total size: 824.000000 MiB number of busy elements: 316 number of free elements: 18 00:10:34.316 list of free elements. size: 16.781128 MiB 00:10:34.316 element at address: 0x200006400000 with size: 1.995972 MiB 00:10:34.316 element at address: 0x20000a600000 with size: 1.995972 MiB 00:10:34.316 element at address: 0x200003e00000 with size: 1.991028 MiB 00:10:34.316 element at address: 0x200019500040 with size: 0.999939 MiB 00:10:34.316 element at address: 0x200019900040 with size: 0.999939 MiB 00:10:34.316 element at address: 0x200019a00000 with size: 0.999084 MiB 00:10:34.316 element at address: 0x200032600000 with size: 0.994324 MiB 00:10:34.316 element at address: 0x200000400000 with size: 0.992004 MiB 00:10:34.316 element at address: 0x200019200000 with size: 0.959656 MiB 00:10:34.316 element at address: 0x200019d00040 with size: 0.936401 MiB 00:10:34.316 element at address: 0x200000200000 with size: 0.716980 MiB 00:10:34.316 element at address: 0x20001b400000 with size: 0.562683 MiB 00:10:34.316 element at address: 0x200000c00000 with size: 0.489197 MiB 00:10:34.316 element at address: 0x200019600000 with size: 0.487976 MiB 00:10:34.316 element at address: 0x200019e00000 
with size: 0.485413 MiB
00:10:34.316 element at address: 0x200012c00000 with size: 0.433228 MiB
00:10:34.316 element at address: 0x200028800000 with size: 0.390442 MiB
00:10:34.316 element at address: 0x200000800000 with size: 0.350891 MiB
00:10:34.316 list of standard malloc elements. size: 199.287964 MiB
00:10:34.316 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:10:34.316 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:10:34.316 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:10:34.316 element at address: 0x2000197fff80 with size: 1.000183 MiB
00:10:34.316 element at address: 0x200019bfff80 with size: 1.000183 MiB
00:10:34.316 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:10:34.316 element at address: 0x200019deff40 with size: 0.062683 MiB
00:10:34.316 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:10:34.316 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:10:34.316 element at address: 0x200019defdc0 with size: 0.000366 MiB
00:10:34.316 element at address: 0x200012bff040 with size: 0.000305 MiB
00:10:34.316 element at address: 0x2000002d7b00 with size: 0.000244 MiB
00:10:34.316 element at address: 0x2000003d9d80 with size: 0.000244 MiB
00:10:34.316-00:10:34.318 [several hundred further elements of 0.000244 MiB each, in the address runs 0x2000004fdf40-0x2000004ffdc0, 0x20000087e1c0-0x20000087f4c0, 0x2000008ff800-0x2000008ffa80, 0x200000c7d3c0-0x200000c7ebc0, 0x200000cfef00-0x200000cff000, 0x20000a5ff200-0x20000a5fff00, 0x200012bff180-0x200012bfff00, 0x200012c6ee80-0x200012c6f880, 0x200012cefbc0, 0x2000192fdd00, 0x20001967cec0-0x20001967d9c0, 0x2000196fdd00, 0x200019affc40, 0x200019defbc0-0x200019defcc0, 0x200019ebc680, 0x20001b4900c0-0x20001b4953c0, 0x200028863f40-0x200028864040, 0x20002886ad00-0x20002886fe80]
00:10:34.318 list of memzone associated elements.
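Every entry in the malloc-element dump above has the fixed shape `element at address: <addr> with size: <n> MiB`, so aggregate numbers can be recomputed from a captured log with awk alone. A small sketch (the sample lines are copied from the dump; the variable and helper names are ours, not part of the test scripts):

```shell
# Total the sizes reported in a dpdk memory dump excerpt (sizes are in MiB).
dump='element at address: 0x200012c00000 with size: 0.433228 MiB
element at address: 0x200028800000 with size: 0.390442 MiB
element at address: 0x200000800000 with size: 0.350891 MiB'

# "size:" anchors each record: the number is the field right after it.
total=$(printf '%s\n' "$dump" |
    awk '{for (i = 1; i <= NF; i++) if ($i == "size:") s += $(i + 1)}
         END {printf "%.6f", s}')
echo "total: $total MiB"    # prints: total: 1.174561 MiB
```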
size: 607.930908 MiB
00:10:34.318 element at address: 0x20001b4954c0 with size: 211.416809 MiB
00:10:34.318 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:10:34.318 element at address: 0x20002886ff80 with size: 157.562622 MiB
00:10:34.318 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:10:34.318 element at address: 0x200012df1e40 with size: 100.055115 MiB
00:10:34.318 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58093_0
00:10:34.318 element at address: 0x200000dff340 with size: 48.003113 MiB
00:10:34.318 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58093_0
00:10:34.318 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:10:34.318 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58093_0
00:10:34.318 element at address: 0x200019fbe900 with size: 20.255615 MiB
00:10:34.318 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:10:34.318 element at address: 0x2000327feb00 with size: 18.005127 MiB
00:10:34.318 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:10:34.318 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:10:34.318 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58093_0
00:10:34.318 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:10:34.318 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58093
00:10:34.318 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:10:34.318 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58093
00:10:34.318 element at address: 0x2000196fde00 with size: 1.008179 MiB
00:10:34.319 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:10:34.319 element at address: 0x200019ebc780 with size: 1.008179 MiB
00:10:34.319 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:10:34.319 element at address: 0x2000192fde00 with size: 1.008179 MiB
00:10:34.319 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:10:34.319 element at address: 0x200012cefcc0 with size: 1.008179 MiB
00:10:34.319 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:10:34.319 element at address: 0x200000cff100 with size: 1.000549 MiB
00:10:34.319 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58093
00:10:34.319 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:10:34.319 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58093
00:10:34.319 element at address: 0x200019affd40 with size: 1.000549 MiB
00:10:34.319 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58093
00:10:34.319 element at address: 0x2000326fe8c0 with size: 1.000549 MiB
00:10:34.319 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58093
00:10:34.319 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:10:34.319 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58093
00:10:34.319 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:10:34.319 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58093
00:10:34.319 element at address: 0x20001967dac0 with size: 0.500549 MiB
00:10:34.319 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:10:34.319 element at address: 0x200012c6f980 with size: 0.500549 MiB
00:10:34.319 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:10:34.319 element at address: 0x200019e7c440 with size: 0.250549 MiB
00:10:34.319 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:10:34.319 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:10:34.319 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58093
00:10:34.319 element at address: 0x20000085df80 with size: 0.125549 MiB
00:10:34.319 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58093
00:10:34.319 element at address: 0x2000192f5ac0 with size: 0.031799 MiB
00:10:34.319 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:10:34.319 element at address: 0x200028864140 with size: 0.023804 MiB
00:10:34.319 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:10:34.319 element at address: 0x200000859d40 with size: 0.016174 MiB
00:10:34.319 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58093
00:10:34.319 element at address: 0x20002886a2c0 with size: 0.002502 MiB
00:10:34.319 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:10:34.319 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:10:34.319 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58093
00:10:34.319 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:10:34.319 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58093
00:10:34.319 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:10:34.319 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58093
00:10:34.319 element at address: 0x20002886ae00 with size: 0.000366 MiB
00:10:34.319 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:10:34.319 13:05:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:10:34.319 13:05:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58093
00:10:34.319 13:05:21 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58093 ']'
00:10:34.319 13:05:21 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58093
00:10:34.319 13:05:21 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:10:34.585 13:05:21 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:34.585 13:05:21 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58093
killing process with pid 58093
13:05:21 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:34.585 13:05:21 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:34.585 13:05:21 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58093'
00:10:34.585 13:05:21 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58093
00:10:34.585 13:05:21 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58093
00:10:37.115 ************************************
00:10:37.115 END TEST dpdk_mem_utility
00:10:37.115 ************************************
00:10:37.115
00:10:37.115 real 0m4.322s
00:10:37.115 user 0m4.212s
00:10:37.115 sys 0m0.789s
00:10:37.115 13:05:23 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:37.115 13:05:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:10:37.115 13:05:23 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:10:37.115 13:05:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:37.115 13:05:23 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:37.115 13:05:23 -- common/autotest_common.sh@10 -- # set +x
00:10:37.115 ************************************
00:10:37.115 START TEST event
00:10:37.115 ************************************
00:10:37.115 13:05:23 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:10:37.115 * Looking for test storage...
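The 13:05:21 xtrace lines above show autotest_common.sh's killprocess helper tearing down pid 58093: guard against an empty pid, probe with `kill -0`, resolve the process name via `ps`, refuse to signal a `sudo` wrapper directly, then `kill` and `wait`. A simplified stand-alone re-creation of that flow (illustrative only, not SPDK's exact code; it assumes the pid is a child of the current shell):

```shell
# Simplified sketch of the killprocess flow traced above.
killprocess_sketch() {
    local pid=$1
    [ -n "$pid" ] || return 1                  # the '[' -z 58093 ']' guard
    kill -0 "$pid" 2>/dev/null || return 0     # nothing to do if already gone
    local process_name
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0
    else
        process_name=$(ps -c -o command -p "$pid" | tail -n1)
    fi
    # never SIGTERM a sudo wrapper; the real child must be signalled instead
    if [ "$process_name" != sudo ]; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true        # reap; SIGTERM exit is expected
    fi
}
```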
00:10:37.115 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:10:37.115 13:05:23 event -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:10:37.115 13:05:23 event -- common/autotest_common.sh@1711 -- # lcov --version
00:10:37.115 13:05:23 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:10:37.116 13:05:24 event -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:10:37.116 13:05:24 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:37.116 13:05:24 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:37.116 13:05:24 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:37.116 13:05:24 event -- scripts/common.sh@336 -- # IFS=.-:
00:10:37.116 13:05:24 event -- scripts/common.sh@336 -- # read -ra ver1
00:10:37.116 13:05:24 event -- scripts/common.sh@337 -- # IFS=.-:
00:10:37.116 13:05:24 event -- scripts/common.sh@337 -- # read -ra ver2
00:10:37.116 13:05:24 event -- scripts/common.sh@338 -- # local 'op=<'
00:10:37.116 13:05:24 event -- scripts/common.sh@340 -- # ver1_l=2
00:10:37.116 13:05:24 event -- scripts/common.sh@341 -- # ver2_l=1
00:10:37.116 13:05:24 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:37.116 13:05:24 event -- scripts/common.sh@344 -- # case "$op" in
00:10:37.116 13:05:24 event -- scripts/common.sh@345 -- # : 1
00:10:37.116 13:05:24 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:37.116 13:05:24 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:37.116 13:05:24 event -- scripts/common.sh@365 -- # decimal 1
00:10:37.116 13:05:24 event -- scripts/common.sh@353 -- # local d=1
00:10:37.116 13:05:24 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:37.116 13:05:24 event -- scripts/common.sh@355 -- # echo 1
00:10:37.116 13:05:24 event -- scripts/common.sh@365 -- # ver1[v]=1
00:10:37.116 13:05:24 event -- scripts/common.sh@366 -- # decimal 2
00:10:37.116 13:05:24 event -- scripts/common.sh@353 -- # local d=2
00:10:37.116 13:05:24 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:37.116 13:05:24 event -- scripts/common.sh@355 -- # echo 2
00:10:37.116 13:05:24 event -- scripts/common.sh@366 -- # ver2[v]=2
00:10:37.116 13:05:24 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:37.116 13:05:24 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:37.116 13:05:24 event -- scripts/common.sh@368 -- # return 0
00:10:37.116 13:05:24 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:37.116 13:05:24 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:10:37.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:37.116 --rc genhtml_branch_coverage=1
00:10:37.116 --rc genhtml_function_coverage=1
00:10:37.116 --rc genhtml_legend=1
00:10:37.116 --rc geninfo_all_blocks=1
00:10:37.116 --rc geninfo_unexecuted_blocks=1
00:10:37.116
00:10:37.116 '
00:10:37.116 13:05:24 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:10:37.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:37.116 --rc genhtml_branch_coverage=1
00:10:37.116 --rc genhtml_function_coverage=1
00:10:37.116 --rc genhtml_legend=1
00:10:37.116 --rc geninfo_all_blocks=1
00:10:37.116 --rc geninfo_unexecuted_blocks=1
00:10:37.116
00:10:37.116 '
00:10:37.116 13:05:24 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:10:37.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:37.116 --rc genhtml_branch_coverage=1
00:10:37.116 --rc genhtml_function_coverage=1
00:10:37.116 --rc genhtml_legend=1
00:10:37.116 --rc geninfo_all_blocks=1
00:10:37.116 --rc geninfo_unexecuted_blocks=1
00:10:37.116
00:10:37.116 '
00:10:37.116 13:05:24 event -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:10:37.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:37.116 --rc genhtml_branch_coverage=1
00:10:37.116 --rc genhtml_function_coverage=1
00:10:37.116 --rc genhtml_legend=1
00:10:37.116 --rc geninfo_all_blocks=1
00:10:37.116 --rc geninfo_unexecuted_blocks=1
00:10:37.116
00:10:37.116 '
00:10:37.116 13:05:24 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:10:37.116 13:05:24 event -- bdev/nbd_common.sh@6 -- # set -e
00:10:37.116 13:05:24 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:10:37.116 13:05:24 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:10:37.116 13:05:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:37.116 13:05:24 event -- common/autotest_common.sh@10 -- # set +x
00:10:37.116 ************************************
00:10:37.116 START TEST event_perf
00:10:37.116 ************************************
00:10:37.116 13:05:24 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:10:37.373 Running I/O for 1 seconds...[2024-12-06 13:05:24.134407] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization...
00:10:37.374 [2024-12-06 13:05:24.134755] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58201 ] 00:10:37.374 [2024-12-06 13:05:24.324059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:37.632 [2024-12-06 13:05:24.481588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.632 [2024-12-06 13:05:24.481692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:37.632 Running I/O for 1 seconds...[2024-12-06 13:05:24.481984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.632 [2024-12-06 13:05:24.482669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:39.076 00:10:39.076 lcore 0: 117006 00:10:39.076 lcore 1: 117006 00:10:39.076 lcore 2: 117009 00:10:39.076 lcore 3: 117011 00:10:39.076 done. 
00:10:39.076 ************************************ 00:10:39.076 END TEST event_perf 00:10:39.076 ************************************ 00:10:39.076 00:10:39.076 real 0m1.678s 00:10:39.076 user 0m4.402s 00:10:39.076 sys 0m0.145s 00:10:39.076 13:05:25 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:39.076 13:05:25 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:10:39.076 13:05:25 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:39.076 13:05:25 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:39.076 13:05:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:39.076 13:05:25 event -- common/autotest_common.sh@10 -- # set +x 00:10:39.076 ************************************ 00:10:39.076 START TEST event_reactor 00:10:39.076 ************************************ 00:10:39.076 13:05:25 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:39.076 [2024-12-06 13:05:25.874526] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:10:39.076 [2024-12-06 13:05:25.874947] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58246 ] 00:10:39.076 [2024-12-06 13:05:26.067360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.334 [2024-12-06 13:05:26.219943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.712 test_start 00:10:40.712 oneshot 00:10:40.712 tick 100 00:10:40.712 tick 100 00:10:40.712 tick 250 00:10:40.712 tick 100 00:10:40.712 tick 100 00:10:40.712 tick 100 00:10:40.712 tick 250 00:10:40.712 tick 500 00:10:40.712 tick 100 00:10:40.712 tick 100 00:10:40.712 tick 250 00:10:40.712 tick 100 00:10:40.712 tick 100 00:10:40.712 test_end 00:10:40.712 ************************************ 00:10:40.712 END TEST event_reactor 00:10:40.712 ************************************ 00:10:40.712 00:10:40.712 real 0m1.675s 00:10:40.712 user 0m1.443s 00:10:40.712 sys 0m0.121s 00:10:40.712 13:05:27 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.712 13:05:27 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:10:40.712 13:05:27 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:40.712 13:05:27 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:40.712 13:05:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.712 13:05:27 event -- common/autotest_common.sh@10 -- # set +x 00:10:40.712 ************************************ 00:10:40.712 START TEST event_reactor_perf 00:10:40.712 ************************************ 00:10:40.712 13:05:27 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:40.712 [2024-12-06 
13:05:27.598111] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:10:40.712 [2024-12-06 13:05:27.598832] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58288 ] 00:10:40.971 [2024-12-06 13:05:27.786964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.971 [2024-12-06 13:05:27.946515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.367 test_start 00:10:42.367 test_end 00:10:42.367 Performance: 282697 events per second 00:10:42.367 00:10:42.367 real 0m1.620s 00:10:42.367 user 0m1.397s 00:10:42.367 sys 0m0.112s 00:10:42.367 13:05:29 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.367 ************************************ 00:10:42.367 END TEST event_reactor_perf 00:10:42.367 ************************************ 00:10:42.367 13:05:29 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:10:42.367 13:05:29 event -- event/event.sh@49 -- # uname -s 00:10:42.367 13:05:29 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:10:42.367 13:05:29 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:42.367 13:05:29 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:42.367 13:05:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.367 13:05:29 event -- common/autotest_common.sh@10 -- # set +x 00:10:42.367 ************************************ 00:10:42.367 START TEST event_scheduler 00:10:42.367 ************************************ 00:10:42.367 13:05:29 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:42.367 * Looking for test storage... 
00:10:42.367 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:10:42.367 13:05:29 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:42.367 13:05:29 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:10:42.367 13:05:29 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:42.626 13:05:29 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:42.626 13:05:29 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:42.626 13:05:29 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:42.626 13:05:29 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:42.626 13:05:29 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:10:42.626 13:05:29 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:10:42.626 13:05:29 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:10:42.626 13:05:29 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:10:42.626 13:05:29 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:10:42.626 13:05:29 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:10:42.626 13:05:29 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:10:42.626 13:05:29 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:42.626 13:05:29 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:10:42.626 13:05:29 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:10:42.626 13:05:29 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:42.626 13:05:29 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:42.626 13:05:29 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:10:42.626 13:05:29 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:10:42.626 13:05:29 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:42.626 13:05:29 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:10:42.626 13:05:29 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:10:42.626 13:05:29 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:10:42.626 13:05:29 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:10:42.626 13:05:29 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:42.626 13:05:29 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:10:42.626 13:05:29 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:10:42.626 13:05:29 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:42.626 13:05:29 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:42.626 13:05:29 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:10:42.626 13:05:29 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:42.626 13:05:29 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:42.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.626 --rc genhtml_branch_coverage=1 00:10:42.626 --rc genhtml_function_coverage=1 00:10:42.626 --rc genhtml_legend=1 00:10:42.626 --rc geninfo_all_blocks=1 00:10:42.626 --rc geninfo_unexecuted_blocks=1 00:10:42.626 00:10:42.626 ' 00:10:42.626 13:05:29 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:42.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.626 --rc genhtml_branch_coverage=1 00:10:42.626 --rc genhtml_function_coverage=1 00:10:42.626 --rc 
genhtml_legend=1 00:10:42.626 --rc geninfo_all_blocks=1 00:10:42.626 --rc geninfo_unexecuted_blocks=1 00:10:42.626 00:10:42.626 ' 00:10:42.626 13:05:29 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:42.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.626 --rc genhtml_branch_coverage=1 00:10:42.626 --rc genhtml_function_coverage=1 00:10:42.626 --rc genhtml_legend=1 00:10:42.626 --rc geninfo_all_blocks=1 00:10:42.626 --rc geninfo_unexecuted_blocks=1 00:10:42.626 00:10:42.627 ' 00:10:42.627 13:05:29 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:42.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.627 --rc genhtml_branch_coverage=1 00:10:42.627 --rc genhtml_function_coverage=1 00:10:42.627 --rc genhtml_legend=1 00:10:42.627 --rc geninfo_all_blocks=1 00:10:42.627 --rc geninfo_unexecuted_blocks=1 00:10:42.627 00:10:42.627 ' 00:10:42.627 13:05:29 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:10:42.627 13:05:29 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58359 00:10:42.627 13:05:29 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:10:42.627 13:05:29 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58359 00:10:42.627 13:05:29 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:10:42.627 13:05:29 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58359 ']' 00:10:42.627 13:05:29 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.627 13:05:29 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:42.627 13:05:29 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:10:42.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.627 13:05:29 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:42.627 13:05:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:42.627 [2024-12-06 13:05:29.532517] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:10:42.627 [2024-12-06 13:05:29.532824] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58359 ] 00:10:42.885 [2024-12-06 13:05:29.718779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:42.885 [2024-12-06 13:05:29.899899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.144 [2024-12-06 13:05:29.900006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.144 [2024-12-06 13:05:29.900199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:43.145 [2024-12-06 13:05:29.901019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:43.711 13:05:30 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:43.711 13:05:30 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:10:43.711 13:05:30 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:10:43.711 13:05:30 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.711 13:05:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:43.711 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:43.711 POWER: Cannot set governor of lcore 0 to userspace 00:10:43.711 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:43.711 POWER: Cannot set governor of lcore 0 to performance 00:10:43.711 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:43.711 POWER: Cannot set governor of lcore 0 to userspace 00:10:43.711 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:43.711 POWER: Cannot set governor of lcore 0 to userspace 00:10:43.711 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:10:43.711 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:10:43.711 POWER: Unable to set Power Management Environment for lcore 0 00:10:43.711 [2024-12-06 13:05:30.507989] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:10:43.711 [2024-12-06 13:05:30.508127] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:10:43.711 [2024-12-06 13:05:30.508181] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:10:43.711 [2024-12-06 13:05:30.508341] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:10:43.712 [2024-12-06 13:05:30.508443] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:10:43.712 [2024-12-06 13:05:30.508511] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:10:43.712 13:05:30 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.712 13:05:30 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:10:43.712 13:05:30 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.712 13:05:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:43.971 [2024-12-06 13:05:30.873796] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:10:43.971 13:05:30 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.971 13:05:30 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:10:43.971 13:05:30 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:43.971 13:05:30 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.971 13:05:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:43.971 ************************************ 00:10:43.971 START TEST scheduler_create_thread 00:10:43.971 ************************************ 00:10:43.971 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:10:43.971 13:05:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:43.972 2 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:43.972 3 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:43.972 4 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:43.972 5 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:43.972 6 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:10:43.972 7 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:43.972 8 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:43.972 9 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:43.972 10 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:43.972 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.232 13:05:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:10:44.232 13:05:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:10:44.232 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.232 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:44.232 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.232 13:05:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:10:44.232 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.232 13:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:45.607 13:05:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.607 13:05:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:10:45.607 13:05:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:10:45.607 13:05:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.607 13:05:32 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:46.542 ************************************ 00:10:46.542 END TEST scheduler_create_thread 00:10:46.542 ************************************ 00:10:46.542 13:05:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.542 00:10:46.542 real 0m2.621s 00:10:46.542 user 0m0.019s 00:10:46.542 sys 0m0.006s 00:10:46.542 13:05:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.542 13:05:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:46.542 13:05:33 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:10:46.542 13:05:33 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58359 00:10:46.542 13:05:33 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58359 ']' 00:10:46.801 13:05:33 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58359 00:10:46.801 13:05:33 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:10:46.801 13:05:33 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:46.801 13:05:33 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58359 00:10:46.801 killing process with pid 58359 00:10:46.801 13:05:33 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:10:46.801 13:05:33 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:10:46.801 13:05:33 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58359' 00:10:46.801 13:05:33 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58359 00:10:46.801 13:05:33 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58359 00:10:47.058 [2024-12-06 13:05:33.989632] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:10:48.435 ************************************ 00:10:48.435 END TEST event_scheduler 00:10:48.435 ************************************ 00:10:48.435 00:10:48.435 real 0m5.936s 00:10:48.435 user 0m10.170s 00:10:48.435 sys 0m0.595s 00:10:48.435 13:05:35 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.435 13:05:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:48.435 13:05:35 event -- event/event.sh@51 -- # modprobe -n nbd 00:10:48.435 13:05:35 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:10:48.435 13:05:35 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:48.435 13:05:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:48.435 13:05:35 event -- common/autotest_common.sh@10 -- # set +x 00:10:48.435 ************************************ 00:10:48.435 START TEST app_repeat 00:10:48.435 ************************************ 00:10:48.435 13:05:35 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:10:48.435 13:05:35 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:48.435 13:05:35 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:48.435 13:05:35 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:10:48.435 13:05:35 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:48.435 13:05:35 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:10:48.435 13:05:35 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:10:48.435 13:05:35 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:10:48.435 Process app_repeat pid: 58470 00:10:48.436 spdk_app_start Round 0 00:10:48.436 13:05:35 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58470 00:10:48.436 13:05:35 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 
1' SIGINT SIGTERM EXIT 00:10:48.436 13:05:35 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:10:48.436 13:05:35 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58470' 00:10:48.436 13:05:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:48.436 13:05:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:10:48.436 13:05:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58470 /var/tmp/spdk-nbd.sock 00:10:48.436 13:05:35 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58470 ']' 00:10:48.436 13:05:35 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:48.436 13:05:35 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:48.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:48.436 13:05:35 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:48.436 13:05:35 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:48.436 13:05:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:48.436 [2024-12-06 13:05:35.300494] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:10:48.436 [2024-12-06 13:05:35.300923] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58470 ] 00:10:48.693 [2024-12-06 13:05:35.489903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:48.693 [2024-12-06 13:05:35.646340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.693 [2024-12-06 13:05:35.646355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.629 13:05:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:49.629 13:05:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:49.629 13:05:36 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:49.629 Malloc0 00:10:49.629 13:05:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:50.197 Malloc1 00:10:50.197 13:05:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:50.197 13:05:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:50.197 13:05:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:50.197 13:05:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:50.197 13:05:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:50.197 13:05:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:50.198 13:05:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:50.198 13:05:37 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:50.198 13:05:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:50.198 13:05:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:50.198 13:05:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:50.198 13:05:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:50.198 13:05:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:50.198 13:05:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:50.198 13:05:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:50.198 13:05:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:50.469 /dev/nbd0 00:10:50.469 13:05:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:50.469 13:05:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:50.469 13:05:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:50.469 13:05:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:50.469 13:05:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:50.469 13:05:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:50.469 13:05:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:50.469 13:05:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:50.469 13:05:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:50.469 13:05:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:50.469 13:05:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:50.469 1+0 records in 00:10:50.469 1+0 
records out 00:10:50.469 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415228 s, 9.9 MB/s 00:10:50.469 13:05:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:50.469 13:05:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:50.469 13:05:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:50.469 13:05:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:50.469 13:05:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:50.469 13:05:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:50.469 13:05:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:50.469 13:05:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:50.729 /dev/nbd1 00:10:50.729 13:05:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:50.729 13:05:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:50.729 13:05:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:50.729 13:05:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:50.729 13:05:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:50.729 13:05:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:50.729 13:05:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:50.729 13:05:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:50.729 13:05:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:50.729 13:05:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:50.729 13:05:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:50.729 1+0 records in 00:10:50.729 1+0 records out 00:10:50.729 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388025 s, 10.6 MB/s 00:10:50.729 13:05:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:50.729 13:05:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:50.729 13:05:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:50.729 13:05:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:50.729 13:05:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:50.729 13:05:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:50.729 13:05:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:50.729 13:05:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:50.729 13:05:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:50.729 13:05:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:51.345 13:05:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:51.345 { 00:10:51.345 "nbd_device": "/dev/nbd0", 00:10:51.345 "bdev_name": "Malloc0" 00:10:51.345 }, 00:10:51.345 { 00:10:51.345 "nbd_device": "/dev/nbd1", 00:10:51.345 "bdev_name": "Malloc1" 00:10:51.345 } 00:10:51.345 ]' 00:10:51.345 13:05:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:51.345 { 00:10:51.345 "nbd_device": "/dev/nbd0", 00:10:51.345 "bdev_name": "Malloc0" 00:10:51.345 }, 00:10:51.345 { 00:10:51.345 "nbd_device": "/dev/nbd1", 00:10:51.345 "bdev_name": "Malloc1" 00:10:51.345 } 00:10:51.345 ]' 00:10:51.345 13:05:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:10:51.345 13:05:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:51.345 /dev/nbd1' 00:10:51.345 13:05:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:51.345 /dev/nbd1' 00:10:51.345 13:05:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:51.345 13:05:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:51.345 13:05:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:51.345 13:05:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:51.345 13:05:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:51.345 13:05:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:51.345 13:05:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:51.345 13:05:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:51.345 13:05:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:51.345 13:05:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:51.345 13:05:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:51.345 13:05:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:51.345 256+0 records in 00:10:51.345 256+0 records out 00:10:51.345 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00981879 s, 107 MB/s 00:10:51.346 13:05:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:51.346 13:05:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:51.346 256+0 records in 00:10:51.346 256+0 records out 00:10:51.346 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0301486 s, 34.8 MB/s 00:10:51.346 13:05:38 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:51.346 13:05:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:51.346 256+0 records in 00:10:51.346 256+0 records out 00:10:51.346 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0295447 s, 35.5 MB/s 00:10:51.346 13:05:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:51.346 13:05:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:51.346 13:05:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:51.346 13:05:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:51.346 13:05:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:51.346 13:05:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:51.346 13:05:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:51.346 13:05:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:51.346 13:05:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:51.346 13:05:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:51.346 13:05:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:51.346 13:05:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:51.346 13:05:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:51.346 13:05:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:51.346 13:05:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:51.346 13:05:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:51.346 13:05:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:51.346 13:05:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:51.346 13:05:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:51.604 13:05:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:51.605 13:05:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:51.605 13:05:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:51.605 13:05:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:51.605 13:05:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:51.605 13:05:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:51.605 13:05:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:51.605 13:05:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:51.605 13:05:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:51.605 13:05:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:52.172 13:05:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:52.172 13:05:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:52.172 13:05:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:52.172 13:05:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:52.172 13:05:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:52.172 13:05:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:52.172 13:05:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:10:52.173 13:05:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:52.173 13:05:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:52.173 13:05:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:52.173 13:05:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:52.431 13:05:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:52.431 13:05:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:52.431 13:05:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:52.431 13:05:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:52.431 13:05:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:52.431 13:05:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:52.431 13:05:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:52.431 13:05:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:52.431 13:05:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:52.431 13:05:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:52.431 13:05:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:52.431 13:05:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:52.431 13:05:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:52.997 13:05:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:54.369 [2024-12-06 13:05:40.969249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:54.369 [2024-12-06 13:05:41.113737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.369 [2024-12-06 13:05:41.113763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.369 
[2024-12-06 13:05:41.328820] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:54.369 [2024-12-06 13:05:41.328954] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:55.769 spdk_app_start Round 1 00:10:55.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:55.769 13:05:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:55.769 13:05:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:10:55.769 13:05:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58470 /var/tmp/spdk-nbd.sock 00:10:55.769 13:05:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58470 ']' 00:10:55.769 13:05:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:55.770 13:05:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.770 13:05:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:10:55.770 13:05:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:55.770 13:05:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:56.334 13:05:43 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:56.334 13:05:43 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:56.334 13:05:43 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:56.591 Malloc0 00:10:56.591 13:05:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:56.848 Malloc1 00:10:56.848 13:05:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:56.848 13:05:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:56.848 13:05:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:56.848 13:05:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:56.848 13:05:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:56.848 13:05:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:56.848 13:05:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:56.848 13:05:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:56.848 13:05:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:56.848 13:05:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:56.848 13:05:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:56.848 13:05:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:56.848 13:05:43 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:56.848 13:05:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:56.848 13:05:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:56.848 13:05:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:57.412 /dev/nbd0 00:10:57.412 13:05:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:57.412 13:05:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:57.412 13:05:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:57.412 13:05:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:57.412 13:05:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:57.412 13:05:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:57.412 13:05:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:57.412 13:05:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:57.412 13:05:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:57.412 13:05:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:57.412 13:05:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:57.412 1+0 records in 00:10:57.412 1+0 records out 00:10:57.412 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409269 s, 10.0 MB/s 00:10:57.412 13:05:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:57.412 13:05:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:57.412 13:05:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:57.413 
13:05:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:57.413 13:05:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:57.413 13:05:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:57.413 13:05:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:57.413 13:05:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:57.669 /dev/nbd1 00:10:57.669 13:05:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:57.669 13:05:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:57.669 13:05:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:57.669 13:05:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:57.669 13:05:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:57.669 13:05:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:57.669 13:05:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:57.669 13:05:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:57.669 13:05:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:57.669 13:05:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:57.669 13:05:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:57.669 1+0 records in 00:10:57.669 1+0 records out 00:10:57.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345873 s, 11.8 MB/s 00:10:57.669 13:05:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:57.669 13:05:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:57.669 13:05:44 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:57.669 13:05:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:57.669 13:05:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:57.669 13:05:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:57.670 13:05:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:57.670 13:05:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:57.670 13:05:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:57.670 13:05:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:57.926 13:05:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:57.926 { 00:10:57.926 "nbd_device": "/dev/nbd0", 00:10:57.926 "bdev_name": "Malloc0" 00:10:57.926 }, 00:10:57.926 { 00:10:57.926 "nbd_device": "/dev/nbd1", 00:10:57.926 "bdev_name": "Malloc1" 00:10:57.926 } 00:10:57.926 ]' 00:10:57.926 13:05:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:57.926 { 00:10:57.926 "nbd_device": "/dev/nbd0", 00:10:57.926 "bdev_name": "Malloc0" 00:10:57.926 }, 00:10:57.926 { 00:10:57.926 "nbd_device": "/dev/nbd1", 00:10:57.926 "bdev_name": "Malloc1" 00:10:57.926 } 00:10:57.926 ]' 00:10:57.926 13:05:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:57.926 13:05:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:57.926 /dev/nbd1' 00:10:57.926 13:05:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:57.926 /dev/nbd1' 00:10:57.926 13:05:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:57.926 13:05:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:57.926 13:05:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:57.926 
13:05:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:57.926 13:05:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:57.926 13:05:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:57.926 13:05:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:57.926 13:05:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:57.926 13:05:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:57.926 13:05:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:57.926 13:05:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:57.926 13:05:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:57.926 256+0 records in 00:10:57.926 256+0 records out 00:10:57.926 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00802195 s, 131 MB/s 00:10:57.926 13:05:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:57.926 13:05:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:57.926 256+0 records in 00:10:57.926 256+0 records out 00:10:57.926 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0311291 s, 33.7 MB/s 00:10:57.926 13:05:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:57.926 13:05:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:58.236 256+0 records in 00:10:58.236 256+0 records out 00:10:58.236 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.035719 s, 29.4 MB/s 00:10:58.236 13:05:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:10:58.236 13:05:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:58.236 13:05:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:58.236 13:05:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:58.236 13:05:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:58.236 13:05:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:58.236 13:05:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:58.236 13:05:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:58.236 13:05:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:58.236 13:05:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:58.236 13:05:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:58.236 13:05:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:58.236 13:05:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:58.236 13:05:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:58.236 13:05:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:58.236 13:05:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:58.236 13:05:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:58.236 13:05:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:58.236 13:05:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:58.497 13:05:45 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:58.497 13:05:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:58.497 13:05:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:58.497 13:05:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:58.497 13:05:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:58.497 13:05:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:58.497 13:05:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:58.497 13:05:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:58.497 13:05:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:58.497 13:05:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:58.754 13:05:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:58.754 13:05:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:58.754 13:05:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:58.754 13:05:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:58.754 13:05:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:58.754 13:05:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:58.754 13:05:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:58.754 13:05:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:58.754 13:05:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:58.754 13:05:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:58.754 13:05:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:59.011 13:05:45 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:59.012 13:05:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:59.012 13:05:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:59.012 13:05:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:59.012 13:05:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:59.012 13:05:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:59.012 13:05:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:59.012 13:05:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:59.012 13:05:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:59.012 13:05:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:59.012 13:05:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:59.012 13:05:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:59.012 13:05:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:59.577 13:05:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:00.994 [2024-12-06 13:05:47.686527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:00.994 [2024-12-06 13:05:47.828337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.994 [2024-12-06 13:05:47.828338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.275 [2024-12-06 13:05:48.040199] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:01.275 [2024-12-06 13:05:48.040343] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:02.658 spdk_app_start Round 2 00:11:02.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:11:02.658 13:05:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:02.658 13:05:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:11:02.658 13:05:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58470 /var/tmp/spdk-nbd.sock 00:11:02.658 13:05:49 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58470 ']' 00:11:02.658 13:05:49 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:02.658 13:05:49 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:02.658 13:05:49 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:02.658 13:05:49 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:02.658 13:05:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:02.916 13:05:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.916 13:05:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:02.916 13:05:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:03.175 Malloc0 00:11:03.175 13:05:50 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:03.741 Malloc1 00:11:03.741 13:05:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:03.741 13:05:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:03.741 13:05:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:03.741 13:05:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:03.741 13:05:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:03.741 13:05:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:03.742 13:05:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:03.742 13:05:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:03.742 13:05:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:03.742 13:05:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:03.742 13:05:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:03.742 13:05:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:03.742 13:05:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:03.742 13:05:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:03.742 13:05:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:03.742 13:05:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:03.742 /dev/nbd0 00:11:04.000 13:05:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:04.000 13:05:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:04.000 13:05:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:04.000 13:05:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:04.000 13:05:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:04.000 13:05:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:04.000 13:05:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:04.000 13:05:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:11:04.000 13:05:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
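The `waitfornbd` steps being traced here follow a two-phase pattern: poll `/proc/partitions` until the device name appears, then prove the device can actually service I/O with a single 4 KiB direct read. An approximate reconstruction of that helper, assuming the retry count and probe sizes seen in the trace (this is a sketch, not SPDK's exact `autotest_common.sh` source):

```shell
# Approximate reconstruction of the waitfornbd polling pattern traced above.
waitfornbd() {
    local nbd_name=$1 i tmp=/tmp/nbdtest.$$
    # Phase 1: wait (up to 20 tries) for the device to appear in /proc/partitions.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions 2>/dev/null && break
        sleep 0.1
    done
    # Phase 2: a successful non-empty 4 KiB direct read proves the kernel can
    # service I/O on the device, not merely that the node exists.
    for ((i = 1; i <= 20; i++)); do
        if dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null \
           && [ "$(stat -c %s "$tmp")" -ne 0 ]; then
            rm -f "$tmp"
            return 0
        fi
        sleep 0.1
    done
    rm -f "$tmp"
    return 1
}
```

The direct-I/O read (`iflag=direct`) matters here: a buffered read could succeed against the page cache before the nbd backend is fully wired up.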
00:11:04.000 13:05:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:04.000 13:05:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:04.000 1+0 records in 00:11:04.000 1+0 records out 00:11:04.000 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260589 s, 15.7 MB/s 00:11:04.000 13:05:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:04.000 13:05:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:04.000 13:05:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:04.000 13:05:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:04.000 13:05:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:04.000 13:05:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:04.000 13:05:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:04.000 13:05:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:04.259 /dev/nbd1 00:11:04.259 13:05:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:04.259 13:05:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:04.259 13:05:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:04.259 13:05:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:11:04.259 13:05:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:04.259 13:05:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:04.259 13:05:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:04.259 13:05:51 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:11:04.259 13:05:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:04.259 13:05:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:04.259 13:05:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:04.259 1+0 records in 00:11:04.259 1+0 records out 00:11:04.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342196 s, 12.0 MB/s 00:11:04.259 13:05:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:04.259 13:05:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:11:04.259 13:05:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:04.259 13:05:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:04.259 13:05:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:11:04.259 13:05:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:04.259 13:05:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:04.259 13:05:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:04.259 13:05:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:04.259 13:05:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:04.518 13:05:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:04.518 { 00:11:04.518 "nbd_device": "/dev/nbd0", 00:11:04.518 "bdev_name": "Malloc0" 00:11:04.518 }, 00:11:04.518 { 00:11:04.518 "nbd_device": "/dev/nbd1", 00:11:04.518 "bdev_name": "Malloc1" 00:11:04.518 } 00:11:04.518 ]' 00:11:04.518 13:05:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:04.518 { 
00:11:04.518 "nbd_device": "/dev/nbd0", 00:11:04.518 "bdev_name": "Malloc0" 00:11:04.518 }, 00:11:04.518 { 00:11:04.518 "nbd_device": "/dev/nbd1", 00:11:04.518 "bdev_name": "Malloc1" 00:11:04.518 } 00:11:04.518 ]' 00:11:04.518 13:05:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:04.518 13:05:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:04.518 /dev/nbd1' 00:11:04.518 13:05:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:04.518 /dev/nbd1' 00:11:04.518 13:05:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:04.518 13:05:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:04.518 13:05:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:04.518 13:05:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:04.518 13:05:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:04.518 13:05:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:04.518 13:05:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:04.518 13:05:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:04.518 13:05:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:04.518 13:05:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:04.518 13:05:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:04.518 13:05:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:04.518 256+0 records in 00:11:04.518 256+0 records out 00:11:04.518 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00847169 s, 124 MB/s 00:11:04.518 13:05:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:04.518 13:05:51 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:04.777 256+0 records in 00:11:04.777 256+0 records out 00:11:04.777 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0313964 s, 33.4 MB/s 00:11:04.777 13:05:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:04.777 13:05:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:04.777 256+0 records in 00:11:04.777 256+0 records out 00:11:04.777 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0305889 s, 34.3 MB/s 00:11:04.777 13:05:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:04.777 13:05:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:04.777 13:05:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:04.777 13:05:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:04.777 13:05:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:04.777 13:05:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:04.777 13:05:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:04.777 13:05:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:04.777 13:05:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:04.777 13:05:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:04.777 13:05:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:04.777 13:05:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
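The `nbd_dd_data_verify` write/verify cycle just traced is: fill a 1 MiB reference file from `/dev/urandom`, `dd` it onto each nbd device, then `cmp` the first 1 MiB of each device back against the reference, and finally remove the temp file. A runnable sketch of that cycle using plain temp files in place of `/dev/nbd*` so it works anywhere (the real tests write with `oflag=direct` to the actual devices):

```shell
# Sketch of the nbd_dd_data_verify write/verify cycle traced above,
# substituting temp files for /dev/nbd0 and /dev/nbd1.
rand_file=$(mktemp) dev_a=$(mktemp) dev_b=$(mktemp)

# Write phase: 1 MiB (256 x 4 KiB) of random data, copied onto each target.
dd if=/dev/urandom of="$rand_file" bs=4096 count=256 status=none
for dev in "$dev_a" "$dev_b"; do
    dd if="$rand_file" of="$dev" bs=4096 count=256 status=none
done

# Verify phase: byte-compare the first 1 MiB of each target against the
# reference, exactly as the trace does with cmp -b -n 1M.
verify=ok
for dev in "$dev_a" "$dev_b"; do
    cmp -b -n 1M "$rand_file" "$dev" || verify=failed
done
rm -f "$rand_file"
echo "verify: $verify"
```

Comparing against a saved reference file (rather than re-reading and hashing) is what lets the test pinpoint the first differing byte on failure, since GNU `cmp -b` prints the differing octets and their offset.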
00:11:04.777 13:05:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:04.777 13:05:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:04.777 13:05:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:04.777 13:05:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:04.777 13:05:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:04.777 13:05:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:04.777 13:05:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:05.036 13:05:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:05.036 13:05:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:05.036 13:05:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:05.036 13:05:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:05.036 13:05:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:05.036 13:05:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:05.036 13:05:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:05.036 13:05:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:05.036 13:05:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:05.036 13:05:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:05.294 13:05:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:05.294 13:05:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:05.294 13:05:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:05.294 13:05:52 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:05.294 13:05:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:05.294 13:05:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:05.294 13:05:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:05.294 13:05:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:05.294 13:05:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:05.294 13:05:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:05.294 13:05:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:05.553 13:05:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:05.553 13:05:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:05.553 13:05:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:05.553 13:05:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:05.553 13:05:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:05.553 13:05:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:05.553 13:05:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:05.553 13:05:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:05.553 13:05:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:05.553 13:05:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:05.553 13:05:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:05.553 13:05:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:05.553 13:05:52 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:06.187 13:05:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:07.581 
[2024-12-06 13:05:54.159360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:07.581 [2024-12-06 13:05:54.293300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.581 [2024-12-06 13:05:54.293312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.581 [2024-12-06 13:05:54.513722] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:07.581 [2024-12-06 13:05:54.513869] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:09.479 13:05:55 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58470 /var/tmp/spdk-nbd.sock 00:11:09.479 13:05:55 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58470 ']' 00:11:09.479 13:05:55 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:09.479 13:05:55 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:09.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:09.479 13:05:55 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:11:09.479 13:05:55 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:09.479 13:05:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:09.479 13:05:56 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:09.479 13:05:56 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:11:09.479 13:05:56 event.app_repeat -- event/event.sh@39 -- # killprocess 58470 00:11:09.479 13:05:56 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58470 ']' 00:11:09.479 13:05:56 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58470 00:11:09.479 13:05:56 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:11:09.479 13:05:56 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:09.479 13:05:56 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58470 00:11:09.479 killing process with pid 58470 00:11:09.479 13:05:56 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:09.479 13:05:56 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:09.479 13:05:56 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58470' 00:11:09.479 13:05:56 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58470 00:11:09.479 13:05:56 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58470 00:11:10.414 spdk_app_start is called in Round 0. 00:11:10.414 Shutdown signal received, stop current app iteration 00:11:10.414 Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 reinitialization... 00:11:10.414 spdk_app_start is called in Round 1. 00:11:10.414 Shutdown signal received, stop current app iteration 00:11:10.414 Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 reinitialization... 00:11:10.414 spdk_app_start is called in Round 2. 
00:11:10.414 Shutdown signal received, stop current app iteration 00:11:10.414 Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 reinitialization... 00:11:10.414 spdk_app_start is called in Round 3. 00:11:10.414 Shutdown signal received, stop current app iteration 00:11:10.414 13:05:57 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:11:10.414 13:05:57 event.app_repeat -- event/event.sh@42 -- # return 0 00:11:10.414 00:11:10.414 real 0m22.141s 00:11:10.414 user 0m48.718s 00:11:10.414 sys 0m3.479s 00:11:10.414 13:05:57 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.414 ************************************ 00:11:10.414 END TEST app_repeat 00:11:10.414 ************************************ 00:11:10.414 13:05:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:10.414 13:05:57 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:11:10.414 13:05:57 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:10.414 13:05:57 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:10.414 13:05:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.414 13:05:57 event -- common/autotest_common.sh@10 -- # set +x 00:11:10.672 ************************************ 00:11:10.672 START TEST cpu_locks 00:11:10.672 ************************************ 00:11:10.672 13:05:57 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:10.672 * Looking for test storage... 
00:11:10.672 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:11:10.672 13:05:57 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:10.672 13:05:57 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:11:10.672 13:05:57 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:10.672 13:05:57 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:10.672 13:05:57 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:10.672 13:05:57 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:10.672 13:05:57 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:10.672 13:05:57 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:11:10.672 13:05:57 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:11:10.672 13:05:57 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:11:10.672 13:05:57 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:11:10.672 13:05:57 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:11:10.672 13:05:57 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:11:10.672 13:05:57 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:11:10.672 13:05:57 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:10.672 13:05:57 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:11:10.672 13:05:57 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:11:10.672 13:05:57 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:10.672 13:05:57 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:10.672 13:05:57 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:11:10.672 13:05:57 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:11:10.672 13:05:57 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:10.672 13:05:57 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:11:10.672 13:05:57 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:11:10.672 13:05:57 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:11:10.672 13:05:57 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:11:10.672 13:05:57 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:10.672 13:05:57 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:11:10.672 13:05:57 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:11:10.672 13:05:57 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:10.672 13:05:57 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:10.672 13:05:57 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:11:10.672 13:05:57 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:10.672 13:05:57 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:10.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.672 --rc genhtml_branch_coverage=1 00:11:10.672 --rc genhtml_function_coverage=1 00:11:10.672 --rc genhtml_legend=1 00:11:10.672 --rc geninfo_all_blocks=1 00:11:10.672 --rc geninfo_unexecuted_blocks=1 00:11:10.672 00:11:10.672 ' 00:11:10.672 13:05:57 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:10.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.672 --rc genhtml_branch_coverage=1 00:11:10.672 --rc genhtml_function_coverage=1 00:11:10.672 --rc genhtml_legend=1 00:11:10.672 --rc geninfo_all_blocks=1 00:11:10.672 --rc geninfo_unexecuted_blocks=1 
00:11:10.672 00:11:10.672 ' 00:11:10.672 13:05:57 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:10.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.672 --rc genhtml_branch_coverage=1 00:11:10.672 --rc genhtml_function_coverage=1 00:11:10.672 --rc genhtml_legend=1 00:11:10.672 --rc geninfo_all_blocks=1 00:11:10.672 --rc geninfo_unexecuted_blocks=1 00:11:10.672 00:11:10.672 ' 00:11:10.672 13:05:57 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:10.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.672 --rc genhtml_branch_coverage=1 00:11:10.672 --rc genhtml_function_coverage=1 00:11:10.672 --rc genhtml_legend=1 00:11:10.672 --rc geninfo_all_blocks=1 00:11:10.672 --rc geninfo_unexecuted_blocks=1 00:11:10.672 00:11:10.672 ' 00:11:10.672 13:05:57 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:11:10.672 13:05:57 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:11:10.672 13:05:57 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:11:10.672 13:05:57 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:11:10.672 13:05:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:10.672 13:05:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.672 13:05:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:10.672 ************************************ 00:11:10.672 START TEST default_locks 00:11:10.672 ************************************ 00:11:10.672 13:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:11:10.672 13:05:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58952 00:11:10.672 13:05:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58952 00:11:10.672 13:05:57 event.cpu_locks.default_locks -- 
event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:10.672 13:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58952 ']' 00:11:10.672 13:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.672 13:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:10.672 13:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.672 13:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:10.673 13:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:10.931 [2024-12-06 13:05:57.765804] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
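The `cmp_versions` trace earlier in this block (from `scripts/common.sh`, deciding whether the installed lcov is older than 2) splits dotted version strings on `.` and compares them component-wise, treating missing components as 0. A minimal re-implementation of that "less than" check under those assumptions (the function name is illustrative; SPDK's helper also handles `-`-separated suffixes, which this sketch omits):

```shell
# Minimal component-wise "version less than" check, in the style of the
# cmp_versions trace above. Assumes purely numeric dot-separated components.
version_lt() {
    local IFS=.
    local -a v1=($1) v2=($2)     # IFS=. splits "1.15" into (1 15)
    local i a b n
    n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < n; i++)); do
        a=${v1[i]:-0} b=${v2[i]:-0}   # missing components compare as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                          # equal is not "less than"
}
```

So `version_lt 1.15 2` succeeds (which in the log selects the lcov 1.x branch-coverage flags), while `version_lt 2.39.2 2.39.2` fails.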
00:11:10.931 [2024-12-06 13:05:57.766165] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58952 ] 00:11:11.189 [2024-12-06 13:05:57.950142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.189 [2024-12-06 13:05:58.098893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.139 13:05:59 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:12.139 13:05:59 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:11:12.139 13:05:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58952 00:11:12.139 13:05:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58952 00:11:12.139 13:05:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:12.705 13:05:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58952 00:11:12.705 13:05:59 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58952 ']' 00:11:12.705 13:05:59 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58952 00:11:12.705 13:05:59 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:11:12.705 13:05:59 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:12.706 13:05:59 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58952 00:11:12.706 killing process with pid 58952 00:11:12.706 13:05:59 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:12.706 13:05:59 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:12.706 13:05:59 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58952' 00:11:12.706 13:05:59 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58952 00:11:12.706 13:05:59 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58952 00:11:15.237 13:06:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58952 00:11:15.237 13:06:02 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:11:15.237 13:06:02 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58952 00:11:15.237 13:06:02 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:11:15.237 13:06:02 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:15.237 13:06:02 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:11:15.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.237 ERROR: process (pid: 58952) is no longer running 00:11:15.237 13:06:02 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:15.237 13:06:02 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58952 00:11:15.237 13:06:02 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58952 ']' 00:11:15.237 13:06:02 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.237 13:06:02 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:15.237 13:06:02 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:15.237 13:06:02 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:15.237 13:06:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:15.237 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58952) - No such process 00:11:15.237 13:06:02 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:15.237 13:06:02 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:11:15.237 13:06:02 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:11:15.237 13:06:02 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:15.237 13:06:02 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:15.237 13:06:02 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:15.237 13:06:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:11:15.237 13:06:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:15.237 13:06:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:11:15.237 13:06:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:15.237 00:11:15.237 real 0m4.408s 00:11:15.237 user 0m4.344s 00:11:15.237 sys 0m0.850s 00:11:15.237 ************************************ 00:11:15.237 END TEST default_locks 00:11:15.237 ************************************ 00:11:15.237 13:06:02 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.237 13:06:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:15.237 13:06:02 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:11:15.237 13:06:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:15.237 13:06:02 event.cpu_locks -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:11:15.237 13:06:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:15.237 ************************************ 00:11:15.237 START TEST default_locks_via_rpc 00:11:15.237 ************************************ 00:11:15.237 13:06:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:11:15.237 13:06:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59038 00:11:15.237 13:06:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59038 00:11:15.237 13:06:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59038 ']' 00:11:15.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.237 13:06:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.237 13:06:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:15.237 13:06:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:15.237 13:06:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.237 13:06:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:15.237 13:06:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.237 [2024-12-06 13:06:02.231017] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
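The `killprocess` traces in these tests follow a consistent shape: confirm the pid is still alive with `kill -0`, look up its command name with `ps` (the trace uses `ps --no-headers -o comm=` and refuses to signal a `sudo` process), announce the kill, send SIGTERM, then `wait` to reap it. A simplified sketch of that pattern (omitting the sudo guard and any kill -9 escalation SPDK's full helper may perform):

```shell
# Simplified sketch of the killprocess pattern traced above.
killprocess() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 1   # pid must refer to a live process
    name=$(ps -o comm= -p "$pid")            # trace uses: ps --no-headers -o comm= <pid>
    echo "killing process with pid $pid ($name)"
    kill "$pid"                              # SIGTERM, giving the app a clean shutdown
    wait "$pid" 2>/dev/null || true          # reap it if it is our child
}
```

The `kill -0` probe is the key detail: it delivers no signal, only checks that the pid exists and is signalable, which is why the log's NOT-path test can report `kill: (58952) - No such process` without side effects.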
00:11:15.237 [2024-12-06 13:06:02.231229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59038 ] 00:11:15.496 [2024-12-06 13:06:02.418556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.754 [2024-12-06 13:06:02.562886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.690 13:06:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:16.690 13:06:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:16.690 13:06:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:11:16.690 13:06:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.690 13:06:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.690 13:06:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.690 13:06:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:11:16.690 13:06:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:16.690 13:06:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:11:16.690 13:06:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:16.690 13:06:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:11:16.690 13:06:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.690 13:06:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.690 13:06:03 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.690 13:06:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59038 00:11:16.690 13:06:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:16.690 13:06:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59038 00:11:17.257 13:06:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59038 00:11:17.257 13:06:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59038 ']' 00:11:17.257 13:06:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59038 00:11:17.257 13:06:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:11:17.257 13:06:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:17.257 13:06:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59038 00:11:17.257 killing process with pid 59038 00:11:17.257 13:06:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:17.257 13:06:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:17.257 13:06:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59038' 00:11:17.257 13:06:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59038 00:11:17.257 13:06:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59038 00:11:19.806 00:11:19.806 real 0m4.359s 00:11:19.806 user 0m4.270s 00:11:19.806 sys 0m0.861s 00:11:19.806 13:06:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.807 13:06:06 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.807 ************************************ 00:11:19.807 END TEST default_locks_via_rpc 00:11:19.807 ************************************ 00:11:19.807 13:06:06 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:11:19.807 13:06:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:19.807 13:06:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.807 13:06:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:19.807 ************************************ 00:11:19.807 START TEST non_locking_app_on_locked_coremask 00:11:19.807 ************************************ 00:11:19.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.807 13:06:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:11:19.807 13:06:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59112 00:11:19.807 13:06:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59112 /var/tmp/spdk.sock 00:11:19.807 13:06:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:19.807 13:06:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59112 ']' 00:11:19.807 13:06:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.807 13:06:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:19.807 13:06:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.807 13:06:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:19.807 13:06:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:19.807 [2024-12-06 13:06:06.649436] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:11:19.807 [2024-12-06 13:06:06.649639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59112 ] 00:11:20.066 [2024-12-06 13:06:06.834562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.066 [2024-12-06 13:06:06.995059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.002 13:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:21.002 13:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:21.002 13:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59128 00:11:21.002 13:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:11:21.002 13:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59128 /var/tmp/spdk2.sock 00:11:21.002 13:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59128 ']' 00:11:21.002 13:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:21.003 13:06:07 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:21.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:21.003 13:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:21.003 13:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:21.003 13:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:21.261 [2024-12-06 13:06:08.080133] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:11:21.261 [2024-12-06 13:06:08.080639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59128 ] 00:11:21.520 [2024-12-06 13:06:08.277492] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:21.520 [2024-12-06 13:06:08.281570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.784 [2024-12-06 13:06:08.574493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.315 13:06:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:24.315 13:06:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:24.315 13:06:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59112 00:11:24.315 13:06:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59112 00:11:24.315 13:06:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:24.881 13:06:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59112 00:11:24.881 13:06:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59112 ']' 00:11:24.881 13:06:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59112 00:11:24.881 13:06:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:24.881 13:06:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:24.881 13:06:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59112 00:11:24.881 killing process with pid 59112 00:11:24.881 13:06:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:24.881 13:06:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:24.881 13:06:11 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59112' 00:11:24.881 13:06:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59112 00:11:24.881 13:06:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59112 00:11:30.185 13:06:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59128 00:11:30.185 13:06:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59128 ']' 00:11:30.185 13:06:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59128 00:11:30.185 13:06:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:30.185 13:06:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:30.185 13:06:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59128 00:11:30.185 killing process with pid 59128 00:11:30.185 13:06:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:30.185 13:06:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:30.185 13:06:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59128' 00:11:30.185 13:06:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59128 00:11:30.185 13:06:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59128 00:11:32.160 ************************************ 00:11:32.160 END TEST non_locking_app_on_locked_coremask 00:11:32.160 ************************************ 00:11:32.160 00:11:32.160 real 0m12.161s 
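The `killprocess` steps traced above (`autotest_common.sh` @954-@978: `kill -0`, `ps --no-headers -o comm=`, then kill and wait) first verify the PID is alive and look up its command name before signalling it. A rough, condensed sketch of that flow — the function name is hypothetical and the final kill/wait is left as a comment:

```shell
# Hypothetical condensed sketch of the killprocess flow from
# autotest_common.sh: confirm the PID exists, fetch its command name, report.
killprocess_sketch() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 1   # signal 0: existence check only
  local name
  name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0 for an SPDK target
  echo "killing process with pid $pid ($name)"
  # the real helper then runs: kill "$pid" && wait "$pid"
}
killprocess_sketch $$   # the current shell's PID always exists
```

`kill -0` sends no signal; it only asks the kernel whether the target exists and is signalable, which is why the harness uses it as a liveness probe.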
00:11:32.160 user 0m12.504s 00:11:32.160 sys 0m1.848s 00:11:32.160 13:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.160 13:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:32.160 13:06:18 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:11:32.160 13:06:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:32.160 13:06:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.160 13:06:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:32.160 ************************************ 00:11:32.160 START TEST locking_app_on_unlocked_coremask 00:11:32.160 ************************************ 00:11:32.160 13:06:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:11:32.160 13:06:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59287 00:11:32.160 13:06:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59287 /var/tmp/spdk.sock 00:11:32.160 13:06:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59287 ']' 00:11:32.160 13:06:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:11:32.160 13:06:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.160 13:06:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:32.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:32.160 13:06:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.160 13:06:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:32.160 13:06:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:32.160 [2024-12-06 13:06:18.847137] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:11:32.160 [2024-12-06 13:06:18.847642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59287 ] 00:11:32.160 [2024-12-06 13:06:19.020825] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:11:32.160 [2024-12-06 13:06:19.021010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.420 [2024-12-06 13:06:19.205124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.356 13:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.356 13:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:33.356 13:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59304 00:11:33.356 13:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:33.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:11:33.356 13:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59304 /var/tmp/spdk2.sock 00:11:33.356 13:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59304 ']' 00:11:33.356 13:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:33.356 13:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:33.356 13:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:33.356 13:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:33.356 13:06:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:33.615 [2024-12-06 13:06:20.428636] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:11:33.616 [2024-12-06 13:06:20.429138] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59304 ] 00:11:33.875 [2024-12-06 13:06:20.636869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.134 [2024-12-06 13:06:20.916095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.666 13:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:36.666 13:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:36.666 13:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59304 00:11:36.666 13:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59304 00:11:36.666 13:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:36.925 13:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59287 00:11:36.925 13:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59287 ']' 00:11:36.925 13:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59287 00:11:36.925 13:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:36.925 13:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:36.925 13:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59287 00:11:37.183 killing process with pid 59287 00:11:37.183 13:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:37.183 13:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:37.183 13:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59287' 00:11:37.183 13:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59287 00:11:37.183 13:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59287 00:11:42.449 13:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59304 00:11:42.449 13:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59304 ']' 00:11:42.449 13:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59304 00:11:42.449 13:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:42.449 13:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:42.449 13:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59304 00:11:42.449 13:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:42.449 13:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:42.449 killing process with pid 59304 00:11:42.449 13:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59304' 00:11:42.449 13:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59304 00:11:42.449 13:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@978 -- # wait 59304 00:11:44.394 00:11:44.394 real 0m12.337s 00:11:44.394 user 0m12.826s 00:11:44.394 sys 0m1.698s 00:11:44.394 13:06:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.394 ************************************ 00:11:44.394 END TEST locking_app_on_unlocked_coremask 00:11:44.394 ************************************ 00:11:44.394 13:06:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:44.394 13:06:31 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:11:44.394 13:06:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:44.394 13:06:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.394 13:06:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:44.394 ************************************ 00:11:44.394 START TEST locking_app_on_locked_coremask 00:11:44.394 ************************************ 00:11:44.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:44.394 13:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:11:44.394 13:06:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59460 00:11:44.394 13:06:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59460 /var/tmp/spdk.sock 00:11:44.394 13:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59460 ']' 00:11:44.394 13:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.394 13:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:44.394 13:06:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:44.394 13:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.394 13:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:44.394 13:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:44.394 [2024-12-06 13:06:31.284527] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:11:44.394 [2024-12-06 13:06:31.284718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59460 ] 00:11:44.653 [2024-12-06 13:06:31.469934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.653 [2024-12-06 13:06:31.604318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.588 13:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:45.588 13:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:45.588 13:06:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59481 00:11:45.588 13:06:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:45.588 13:06:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59481 /var/tmp/spdk2.sock 00:11:45.588 13:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:11:45.588 13:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59481 /var/tmp/spdk2.sock 00:11:45.588 13:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:11:45.588 13:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:45.588 13:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:11:45.588 13:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:11:45.588 13:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59481 /var/tmp/spdk2.sock 00:11:45.588 13:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59481 ']' 00:11:45.588 13:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:45.589 13:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:45.589 13:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:45.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:45.589 13:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:45.589 13:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:45.847 [2024-12-06 13:06:32.625205] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:11:45.847 [2024-12-06 13:06:32.625646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59481 ] 00:11:45.847 [2024-12-06 13:06:32.836212] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59460 has claimed it. 00:11:45.847 [2024-12-06 13:06:32.836291] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:11:46.415 ERROR: process (pid: 59481) is no longer running 00:11:46.415 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59481) - No such process 00:11:46.415 13:06:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:46.415 13:06:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:11:46.415 13:06:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:11:46.415 13:06:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:46.415 13:06:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:46.415 13:06:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:46.415 13:06:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59460 00:11:46.415 13:06:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59460 00:11:46.415 13:06:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:46.982 13:06:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59460 00:11:46.982 13:06:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59460 ']' 00:11:46.982 13:06:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59460 00:11:46.982 13:06:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:11:46.982 13:06:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:46.982 13:06:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59460 00:11:46.982 
13:06:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:46.982 13:06:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:46.982 killing process with pid 59460 00:11:46.982 13:06:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59460' 00:11:46.982 13:06:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59460 00:11:46.982 13:06:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59460 00:11:49.505 00:11:49.505 real 0m4.890s 00:11:49.505 user 0m5.246s 00:11:49.506 sys 0m0.895s 00:11:49.506 13:06:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:49.506 ************************************ 00:11:49.506 END TEST locking_app_on_locked_coremask 00:11:49.506 ************************************ 00:11:49.506 13:06:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:49.506 13:06:36 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:11:49.506 13:06:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:49.506 13:06:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:49.506 13:06:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:49.506 ************************************ 00:11:49.506 START TEST locking_overlapped_coremask 00:11:49.506 ************************************ 00:11:49.506 13:06:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:11:49.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:49.506 13:06:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59551 00:11:49.506 13:06:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59551 /var/tmp/spdk.sock 00:11:49.506 13:06:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59551 ']' 00:11:49.506 13:06:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.506 13:06:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:11:49.506 13:06:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:49.506 13:06:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.506 13:06:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:49.506 13:06:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:49.506 [2024-12-06 13:06:36.192884] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:11:49.506 [2024-12-06 13:06:36.193055] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59551 ] 00:11:49.506 [2024-12-06 13:06:36.369215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:49.763 [2024-12-06 13:06:36.520661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.763 [2024-12-06 13:06:36.520796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.763 [2024-12-06 13:06:36.520819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:50.700 13:06:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:50.700 13:06:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:11:50.700 13:06:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:11:50.700 13:06:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59569 00:11:50.700 13:06:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59569 /var/tmp/spdk2.sock 00:11:50.700 13:06:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:11:50.700 13:06:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59569 /var/tmp/spdk2.sock 00:11:50.700 13:06:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:11:50.700 13:06:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:50.700 13:06:37 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:11:50.700 13:06:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:50.700 13:06:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59569 /var/tmp/spdk2.sock 00:11:50.700 13:06:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59569 ']' 00:11:50.700 13:06:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:50.700 13:06:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.700 13:06:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:50.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:50.700 13:06:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.700 13:06:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:50.700 [2024-12-06 13:06:37.590900] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:11:50.700 [2024-12-06 13:06:37.591327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59569 ] 00:11:51.003 [2024-12-06 13:06:37.794650] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59551 has claimed it. 00:11:51.003 [2024-12-06 13:06:37.794786] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:11:51.570 ERROR: process (pid: 59569) is no longer running 00:11:51.570 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59569) - No such process 00:11:51.570 13:06:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:51.570 13:06:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:11:51.571 13:06:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:11:51.571 13:06:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:51.571 13:06:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:51.571 13:06:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:51.571 13:06:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:11:51.571 13:06:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:51.571 13:06:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:51.571 13:06:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:51.571 13:06:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59551 00:11:51.571 13:06:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59551 ']' 00:11:51.571 13:06:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59551 00:11:51.571 13:06:38 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:11:51.571 13:06:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:51.571 13:06:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59551 00:11:51.571 13:06:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:51.571 13:06:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:51.571 13:06:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59551' 00:11:51.571 killing process with pid 59551 00:11:51.571 13:06:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59551 00:11:51.571 13:06:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59551 00:11:54.102 00:11:54.102 real 0m4.608s 00:11:54.102 user 0m12.415s 00:11:54.102 sys 0m0.772s 00:11:54.102 13:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:54.102 13:06:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:54.102 ************************************ 00:11:54.102 END TEST locking_overlapped_coremask 00:11:54.102 ************************************ 00:11:54.103 13:06:40 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:11:54.103 13:06:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:54.103 13:06:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:54.103 13:06:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:54.103 ************************************ 00:11:54.103 START TEST 
locking_overlapped_coremask_via_rpc 00:11:54.103 ************************************ 00:11:54.103 13:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:11:54.103 13:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59633 00:11:54.103 13:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59633 /var/tmp/spdk.sock 00:11:54.103 13:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59633 ']' 00:11:54.103 13:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:11:54.103 13:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.103 13:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:54.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.103 13:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.103 13:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:54.103 13:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.103 [2024-12-06 13:06:40.883632] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:11:54.103 [2024-12-06 13:06:40.884129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59633 ] 00:11:54.103 [2024-12-06 13:06:41.073764] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:11:54.103 [2024-12-06 13:06:41.074031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:54.361 [2024-12-06 13:06:41.216312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:54.361 [2024-12-06 13:06:41.216461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.361 [2024-12-06 13:06:41.216502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:55.296 13:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:55.296 13:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:55.296 13:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59662 00:11:55.296 13:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:11:55.296 13:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59662 /var/tmp/spdk2.sock 00:11:55.296 13:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59662 ']' 00:11:55.296 13:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:55.296 13:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:55.296 13:06:42 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:55.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:55.296 13:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:55.296 13:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.296 [2024-12-06 13:06:42.238934] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:11:55.296 [2024-12-06 13:06:42.239409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59662 ] 00:11:55.555 [2024-12-06 13:06:42.443930] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:55.555 [2024-12-06 13:06:42.444010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:55.813 [2024-12-06 13:06:42.715585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:55.813 [2024-12-06 13:06:42.715698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:55.813 [2024-12-06 13:06:42.715712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:58.364 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:58.365 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:58.365 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:11:58.365 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.365 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.365 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.365 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:58.365 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:58.365 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:58.365 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:58.365 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:58.365 13:06:45 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:58.365 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:58.365 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:58.365 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.365 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.365 [2024-12-06 13:06:45.037704] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59633 has claimed it. 00:11:58.365 request: 00:11:58.365 { 00:11:58.365 "method": "framework_enable_cpumask_locks", 00:11:58.365 "req_id": 1 00:11:58.365 } 00:11:58.365 Got JSON-RPC error response 00:11:58.365 response: 00:11:58.365 { 00:11:58.365 "code": -32603, 00:11:58.365 "message": "Failed to claim CPU core: 2" 00:11:58.365 } 00:11:58.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:58.365 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:58.365 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:58.365 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:58.365 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:58.365 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:58.365 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59633 /var/tmp/spdk.sock 00:11:58.365 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59633 ']' 00:11:58.365 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.365 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:58.365 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:58.365 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:58.365 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.365 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:58.365 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:58.365 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59662 /var/tmp/spdk2.sock 00:11:58.365 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59662 ']' 00:11:58.365 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:58.365 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:58.365 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:58.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:11:58.365 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:58.365 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.932 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:58.932 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:58.932 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:11:58.932 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:58.932 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:58.932 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:58.932 ************************************ 00:11:58.932 END TEST locking_overlapped_coremask_via_rpc 00:11:58.932 ************************************ 00:11:58.932 00:11:58.932 real 0m4.910s 00:11:58.932 user 0m1.774s 00:11:58.932 sys 0m0.258s 00:11:58.932 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:58.932 13:06:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.932 13:06:45 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:11:58.932 13:06:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59633 ]] 00:11:58.932 13:06:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59633 00:11:58.932 13:06:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59633 ']' 00:11:58.932 13:06:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59633 00:11:58.932 13:06:45 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:11:58.932 13:06:45 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:58.933 13:06:45 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59633 00:11:58.933 killing process with pid 59633 00:11:58.933 13:06:45 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:58.933 13:06:45 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:58.933 13:06:45 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59633' 00:11:58.933 13:06:45 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59633 00:11:58.933 13:06:45 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59633 00:12:01.532 13:06:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59662 ]] 00:12:01.532 13:06:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59662 00:12:01.532 13:06:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59662 ']' 00:12:01.532 13:06:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59662 00:12:01.532 13:06:48 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:12:01.532 13:06:48 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:01.532 13:06:48 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59662 00:12:01.532 killing process with pid 59662 00:12:01.532 13:06:48 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:01.532 13:06:48 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:01.532 13:06:48 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59662' 00:12:01.532 13:06:48 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59662 00:12:01.532 13:06:48 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59662 00:12:03.451 13:06:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:12:03.451 13:06:50 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:12:03.451 13:06:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59633 ]] 00:12:03.451 13:06:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59633 00:12:03.451 13:06:50 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59633 ']' 00:12:03.451 13:06:50 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59633 00:12:03.451 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59633) - No such process 00:12:03.451 Process with pid 59633 is not found 00:12:03.451 13:06:50 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59633 is not found' 00:12:03.451 13:06:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59662 ]] 00:12:03.451 13:06:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59662 00:12:03.451 13:06:50 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59662 ']' 00:12:03.451 13:06:50 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59662 00:12:03.451 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59662) - No such process 00:12:03.451 Process with pid 59662 is not found 00:12:03.451 13:06:50 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59662 is not found' 00:12:03.451 13:06:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:12:03.451 00:12:03.451 real 0m52.886s 00:12:03.451 user 1m29.969s 00:12:03.451 sys 0m8.486s 00:12:03.451 ************************************ 00:12:03.451 END TEST cpu_locks 00:12:03.451 ************************************ 00:12:03.451 13:06:50 event.cpu_locks -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:12:03.451 13:06:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:03.451 00:12:03.451 real 1m26.481s 00:12:03.451 user 2m36.301s 00:12:03.451 sys 0m13.256s 00:12:03.451 13:06:50 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:03.451 13:06:50 event -- common/autotest_common.sh@10 -- # set +x 00:12:03.451 ************************************ 00:12:03.451 END TEST event 00:12:03.451 ************************************ 00:12:03.451 13:06:50 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:03.451 13:06:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:03.451 13:06:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.451 13:06:50 -- common/autotest_common.sh@10 -- # set +x 00:12:03.451 ************************************ 00:12:03.451 START TEST thread 00:12:03.451 ************************************ 00:12:03.451 13:06:50 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:03.727 * Looking for test storage... 
00:12:03.727 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:12:03.727 13:06:50 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:03.727 13:06:50 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:12:03.727 13:06:50 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:03.727 13:06:50 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:03.727 13:06:50 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:03.727 13:06:50 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:03.727 13:06:50 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:03.727 13:06:50 thread -- scripts/common.sh@336 -- # IFS=.-: 00:12:03.727 13:06:50 thread -- scripts/common.sh@336 -- # read -ra ver1 00:12:03.727 13:06:50 thread -- scripts/common.sh@337 -- # IFS=.-: 00:12:03.727 13:06:50 thread -- scripts/common.sh@337 -- # read -ra ver2 00:12:03.727 13:06:50 thread -- scripts/common.sh@338 -- # local 'op=<' 00:12:03.727 13:06:50 thread -- scripts/common.sh@340 -- # ver1_l=2 00:12:03.727 13:06:50 thread -- scripts/common.sh@341 -- # ver2_l=1 00:12:03.727 13:06:50 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:03.727 13:06:50 thread -- scripts/common.sh@344 -- # case "$op" in 00:12:03.727 13:06:50 thread -- scripts/common.sh@345 -- # : 1 00:12:03.727 13:06:50 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:03.727 13:06:50 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:03.727 13:06:50 thread -- scripts/common.sh@365 -- # decimal 1 00:12:03.727 13:06:50 thread -- scripts/common.sh@353 -- # local d=1 00:12:03.727 13:06:50 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:03.727 13:06:50 thread -- scripts/common.sh@355 -- # echo 1 00:12:03.727 13:06:50 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:12:03.727 13:06:50 thread -- scripts/common.sh@366 -- # decimal 2 00:12:03.727 13:06:50 thread -- scripts/common.sh@353 -- # local d=2 00:12:03.727 13:06:50 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:03.727 13:06:50 thread -- scripts/common.sh@355 -- # echo 2 00:12:03.727 13:06:50 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:12:03.727 13:06:50 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:03.727 13:06:50 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:03.727 13:06:50 thread -- scripts/common.sh@368 -- # return 0 00:12:03.727 13:06:50 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:03.727 13:06:50 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:03.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.727 --rc genhtml_branch_coverage=1 00:12:03.727 --rc genhtml_function_coverage=1 00:12:03.727 --rc genhtml_legend=1 00:12:03.727 --rc geninfo_all_blocks=1 00:12:03.727 --rc geninfo_unexecuted_blocks=1 00:12:03.727 00:12:03.727 ' 00:12:03.727 13:06:50 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:03.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.727 --rc genhtml_branch_coverage=1 00:12:03.727 --rc genhtml_function_coverage=1 00:12:03.727 --rc genhtml_legend=1 00:12:03.727 --rc geninfo_all_blocks=1 00:12:03.727 --rc geninfo_unexecuted_blocks=1 00:12:03.727 00:12:03.727 ' 00:12:03.727 13:06:50 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:03.727 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.727 --rc genhtml_branch_coverage=1 00:12:03.727 --rc genhtml_function_coverage=1 00:12:03.727 --rc genhtml_legend=1 00:12:03.727 --rc geninfo_all_blocks=1 00:12:03.727 --rc geninfo_unexecuted_blocks=1 00:12:03.727 00:12:03.727 ' 00:12:03.727 13:06:50 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:03.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.727 --rc genhtml_branch_coverage=1 00:12:03.727 --rc genhtml_function_coverage=1 00:12:03.727 --rc genhtml_legend=1 00:12:03.727 --rc geninfo_all_blocks=1 00:12:03.727 --rc geninfo_unexecuted_blocks=1 00:12:03.727 00:12:03.727 ' 00:12:03.727 13:06:50 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:03.727 13:06:50 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:12:03.727 13:06:50 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.727 13:06:50 thread -- common/autotest_common.sh@10 -- # set +x 00:12:03.727 ************************************ 00:12:03.727 START TEST thread_poller_perf 00:12:03.727 ************************************ 00:12:03.727 13:06:50 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:03.727 [2024-12-06 13:06:50.652983] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:12:03.727 [2024-12-06 13:06:50.653129] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59857 ] 00:12:03.986 [2024-12-06 13:06:50.834284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.986 [2024-12-06 13:06:50.994108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.986 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:12:05.416 [2024-12-06T13:06:52.432Z] ====================================== 00:12:05.416 [2024-12-06T13:06:52.432Z] busy:2215073511 (cyc) 00:12:05.416 [2024-12-06T13:06:52.432Z] total_run_count: 299000 00:12:05.416 [2024-12-06T13:06:52.432Z] tsc_hz: 2200000000 (cyc) 00:12:05.416 [2024-12-06T13:06:52.432Z] ====================================== 00:12:05.416 [2024-12-06T13:06:52.432Z] poller_cost: 7408 (cyc), 3367 (nsec) 00:12:05.416 00:12:05.416 real 0m1.625s 00:12:05.417 user 0m1.407s 00:12:05.417 sys 0m0.108s 00:12:05.417 13:06:52 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:05.417 13:06:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:12:05.417 ************************************ 00:12:05.417 END TEST thread_poller_perf 00:12:05.417 ************************************ 00:12:05.417 13:06:52 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:05.417 13:06:52 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:12:05.417 13:06:52 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:05.417 13:06:52 thread -- common/autotest_common.sh@10 -- # set +x 00:12:05.417 ************************************ 00:12:05.417 START TEST thread_poller_perf 00:12:05.417 
************************************ 00:12:05.417 13:06:52 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:05.417 [2024-12-06 13:06:52.340324] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:12:05.417 [2024-12-06 13:06:52.340543] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59894 ] 00:12:05.675 [2024-12-06 13:06:52.533842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.933 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:12:05.933 [2024-12-06 13:06:52.693706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.308 [2024-12-06T13:06:54.324Z] ====================================== 00:12:07.308 [2024-12-06T13:06:54.324Z] busy:2206173489 (cyc) 00:12:07.308 [2024-12-06T13:06:54.324Z] total_run_count: 3285000 00:12:07.308 [2024-12-06T13:06:54.324Z] tsc_hz: 2200000000 (cyc) 00:12:07.308 [2024-12-06T13:06:54.324Z] ====================================== 00:12:07.308 [2024-12-06T13:06:54.324Z] poller_cost: 671 (cyc), 305 (nsec) 00:12:07.308 00:12:07.308 real 0m1.643s 00:12:07.308 user 0m1.412s 00:12:07.308 sys 0m0.118s 00:12:07.308 13:06:53 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.308 13:06:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:12:07.308 ************************************ 00:12:07.308 END TEST thread_poller_perf 00:12:07.308 ************************************ 00:12:07.308 13:06:53 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:12:07.308 00:12:07.308 real 0m3.579s 00:12:07.308 user 0m2.974s 00:12:07.308 sys 0m0.380s 00:12:07.308 13:06:53 thread -- 
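[Editor's note] The two poller_perf result tables above are internally consistent: poller_cost is just the busy cycle count divided by total_run_count, converted to nanoseconds via tsc_hz. A minimal sketch (not SPDK's actual implementation — the real computation lives in test/thread/poller_perf/poller_perf.c) that reproduces the logged figures:

```python
# Sketch: derive poller_cost from the values printed in the tables above.
# Integer division matches the logged output; the real tool's rounding may differ.
def poller_cost(busy_cyc, run_count, tsc_hz):
    cyc = busy_cyc // run_count            # cycles spent per poller invocation
    nsec = cyc * 1_000_000_000 // tsc_hz   # convert cycles to nanoseconds
    return cyc, nsec

# Values copied from the two runs logged above (tsc_hz: 2200000000).
print(poller_cost(2215073511, 299000, 2200000000))   # 1 us period run -> (7408, 3367)
print(poller_cost(2206173489, 3285000, 2200000000))  # 0 us period run -> (671, 305)
```

Both tuples match the `poller_cost` lines in the tables, which is a quick sanity check that the busy counter and run counter in the log agree.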
common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.308 13:06:53 thread -- common/autotest_common.sh@10 -- # set +x 00:12:07.308 ************************************ 00:12:07.308 END TEST thread 00:12:07.308 ************************************ 00:12:07.308 13:06:54 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:12:07.308 13:06:54 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:07.308 13:06:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:07.308 13:06:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.308 13:06:54 -- common/autotest_common.sh@10 -- # set +x 00:12:07.308 ************************************ 00:12:07.308 START TEST app_cmdline 00:12:07.308 ************************************ 00:12:07.308 13:06:54 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:07.308 * Looking for test storage... 00:12:07.308 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:07.308 13:06:54 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:07.308 13:06:54 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:12:07.308 13:06:54 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:07.308 13:06:54 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:07.308 13:06:54 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:07.308 13:06:54 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:07.308 13:06:54 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:07.308 13:06:54 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:12:07.308 13:06:54 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:12:07.308 13:06:54 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:12:07.308 13:06:54 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:12:07.308 13:06:54 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:12:07.308 13:06:54 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:12:07.308 13:06:54 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:12:07.308 13:06:54 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:07.308 13:06:54 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:12:07.308 13:06:54 app_cmdline -- scripts/common.sh@345 -- # : 1 00:12:07.308 13:06:54 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:07.308 13:06:54 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:07.308 13:06:54 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:12:07.308 13:06:54 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:12:07.308 13:06:54 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:07.308 13:06:54 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:12:07.308 13:06:54 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:12:07.308 13:06:54 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:12:07.308 13:06:54 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:12:07.308 13:06:54 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:07.308 13:06:54 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:12:07.308 13:06:54 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:12:07.308 13:06:54 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:07.309 13:06:54 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:07.309 13:06:54 app_cmdline -- scripts/common.sh@368 -- # return 0 00:12:07.309 13:06:54 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:07.309 13:06:54 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:07.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.309 --rc genhtml_branch_coverage=1 00:12:07.309 --rc genhtml_function_coverage=1 00:12:07.309 --rc 
genhtml_legend=1 00:12:07.309 --rc geninfo_all_blocks=1 00:12:07.309 --rc geninfo_unexecuted_blocks=1 00:12:07.309 00:12:07.309 ' 00:12:07.309 13:06:54 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:07.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.309 --rc genhtml_branch_coverage=1 00:12:07.309 --rc genhtml_function_coverage=1 00:12:07.309 --rc genhtml_legend=1 00:12:07.309 --rc geninfo_all_blocks=1 00:12:07.309 --rc geninfo_unexecuted_blocks=1 00:12:07.309 00:12:07.309 ' 00:12:07.309 13:06:54 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:07.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.309 --rc genhtml_branch_coverage=1 00:12:07.309 --rc genhtml_function_coverage=1 00:12:07.309 --rc genhtml_legend=1 00:12:07.309 --rc geninfo_all_blocks=1 00:12:07.309 --rc geninfo_unexecuted_blocks=1 00:12:07.309 00:12:07.309 ' 00:12:07.309 13:06:54 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:07.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.309 --rc genhtml_branch_coverage=1 00:12:07.309 --rc genhtml_function_coverage=1 00:12:07.309 --rc genhtml_legend=1 00:12:07.309 --rc geninfo_all_blocks=1 00:12:07.309 --rc geninfo_unexecuted_blocks=1 00:12:07.309 00:12:07.309 ' 00:12:07.309 13:06:54 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:12:07.309 13:06:54 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59977 00:12:07.309 13:06:54 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:12:07.309 13:06:54 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59977 00:12:07.309 13:06:54 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59977 ']' 00:12:07.309 13:06:54 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.309 13:06:54 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:12:07.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.309 13:06:54 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.309 13:06:54 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:07.309 13:06:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:07.568 [2024-12-06 13:06:54.338935] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:12:07.568 [2024-12-06 13:06:54.339170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59977 ] 00:12:07.568 [2024-12-06 13:06:54.523659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.827 [2024-12-06 13:06:54.685578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.826 13:06:55 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:08.826 13:06:55 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:12:08.826 13:06:55 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:12:09.084 { 00:12:09.084 "version": "SPDK v25.01-pre git sha1 e9db16374", 00:12:09.084 "fields": { 00:12:09.084 "major": 25, 00:12:09.084 "minor": 1, 00:12:09.084 "patch": 0, 00:12:09.084 "suffix": "-pre", 00:12:09.084 "commit": "e9db16374" 00:12:09.084 } 00:12:09.084 } 00:12:09.084 13:06:55 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:12:09.084 13:06:55 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:12:09.084 13:06:55 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:12:09.084 13:06:55 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:12:09.084 13:06:55 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:12:09.084 13:06:55 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:12:09.084 13:06:55 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.084 13:06:55 app_cmdline -- app/cmdline.sh@26 -- # sort 00:12:09.084 13:06:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:09.084 13:06:55 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.084 13:06:55 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:12:09.084 13:06:55 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:12:09.084 13:06:55 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:09.084 13:06:55 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:12:09.084 13:06:55 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:09.084 13:06:55 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:09.084 13:06:55 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:09.084 13:06:55 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:09.084 13:06:55 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:09.084 13:06:55 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:09.084 13:06:55 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:09.084 13:06:55 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:09.084 13:06:55 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:09.084 13:06:55 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:09.342 request: 00:12:09.342 { 00:12:09.342 "method": "env_dpdk_get_mem_stats", 00:12:09.342 "req_id": 1 00:12:09.342 } 00:12:09.342 Got JSON-RPC error response 00:12:09.342 response: 00:12:09.342 { 00:12:09.342 "code": -32601, 00:12:09.342 "message": "Method not found" 00:12:09.342 } 00:12:09.342 13:06:56 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:12:09.342 13:06:56 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:09.342 13:06:56 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:09.342 13:06:56 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:09.342 13:06:56 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59977 00:12:09.342 13:06:56 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59977 ']' 00:12:09.342 13:06:56 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59977 00:12:09.343 13:06:56 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:12:09.343 13:06:56 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:09.343 13:06:56 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59977 00:12:09.343 13:06:56 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:09.343 13:06:56 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:09.343 killing process with pid 59977 00:12:09.343 13:06:56 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59977' 00:12:09.343 13:06:56 app_cmdline -- common/autotest_common.sh@973 -- # kill 59977 00:12:09.343 13:06:56 app_cmdline -- common/autotest_common.sh@978 -- # wait 59977 00:12:11.874 00:12:11.874 real 0m4.583s 00:12:11.874 user 0m5.071s 00:12:11.874 sys 0m0.724s 00:12:11.874 13:06:58 app_cmdline -- 
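[Editor's note] The `env_dpdk_get_mem_stats` failure above is expected: spdk_tgt was launched with `--rpcs-allowed spdk_get_version,rpc_get_methods`, so every other method is rejected with code -32601, the standard JSON-RPC "Method not found" error. A hypothetical allow-list dispatcher illustrating the behavior (the field names follow the request/response shown in the log; this is not SPDK's actual RPC server code):

```python
# Sketch: an RPC allow-list rejecting methods outside --rpcs-allowed,
# mirroring the request/response pair logged above.
ALLOWED = {"spdk_get_version", "rpc_get_methods"}

def dispatch(request):
    # Methods not on the allow-list get the JSON-RPC "Method not found" error.
    if request["method"] not in ALLOWED:
        return {"code": -32601, "message": "Method not found"}
    return {"result": "ok"}  # placeholder for a real handler

print(dispatch({"method": "env_dpdk_get_mem_stats", "req_id": 1}))
# -> {'code': -32601, 'message': 'Method not found'}
```

The test deliberately exercises this path: cmdline.sh calls a disallowed method via NOT() and passes only if the call fails.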
common/autotest_common.sh@1130 -- # xtrace_disable 00:12:11.874 13:06:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:11.874 ************************************ 00:12:11.874 END TEST app_cmdline 00:12:11.874 ************************************ 00:12:11.874 13:06:58 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:11.874 13:06:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:11.874 13:06:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:11.874 13:06:58 -- common/autotest_common.sh@10 -- # set +x 00:12:11.874 ************************************ 00:12:11.874 START TEST version 00:12:11.875 ************************************ 00:12:11.875 13:06:58 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:11.875 * Looking for test storage... 00:12:11.875 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:11.875 13:06:58 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:11.875 13:06:58 version -- common/autotest_common.sh@1711 -- # lcov --version 00:12:11.875 13:06:58 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:11.875 13:06:58 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:11.875 13:06:58 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:11.875 13:06:58 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:11.875 13:06:58 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:11.875 13:06:58 version -- scripts/common.sh@336 -- # IFS=.-: 00:12:11.875 13:06:58 version -- scripts/common.sh@336 -- # read -ra ver1 00:12:11.875 13:06:58 version -- scripts/common.sh@337 -- # IFS=.-: 00:12:11.875 13:06:58 version -- scripts/common.sh@337 -- # read -ra ver2 00:12:11.875 13:06:58 version -- scripts/common.sh@338 -- # local 'op=<' 00:12:11.875 13:06:58 version -- scripts/common.sh@340 -- # ver1_l=2 00:12:11.875 13:06:58 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:12:11.875 13:06:58 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:11.875 13:06:58 version -- scripts/common.sh@344 -- # case "$op" in 00:12:11.875 13:06:58 version -- scripts/common.sh@345 -- # : 1 00:12:11.875 13:06:58 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:11.875 13:06:58 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:11.875 13:06:58 version -- scripts/common.sh@365 -- # decimal 1 00:12:11.875 13:06:58 version -- scripts/common.sh@353 -- # local d=1 00:12:11.875 13:06:58 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:11.875 13:06:58 version -- scripts/common.sh@355 -- # echo 1 00:12:11.875 13:06:58 version -- scripts/common.sh@365 -- # ver1[v]=1 00:12:11.875 13:06:58 version -- scripts/common.sh@366 -- # decimal 2 00:12:11.875 13:06:58 version -- scripts/common.sh@353 -- # local d=2 00:12:11.875 13:06:58 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:11.875 13:06:58 version -- scripts/common.sh@355 -- # echo 2 00:12:11.875 13:06:58 version -- scripts/common.sh@366 -- # ver2[v]=2 00:12:11.875 13:06:58 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:11.875 13:06:58 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:11.875 13:06:58 version -- scripts/common.sh@368 -- # return 0 00:12:11.875 13:06:58 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:11.875 13:06:58 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:11.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.875 --rc genhtml_branch_coverage=1 00:12:11.875 --rc genhtml_function_coverage=1 00:12:11.875 --rc genhtml_legend=1 00:12:11.875 --rc geninfo_all_blocks=1 00:12:11.875 --rc geninfo_unexecuted_blocks=1 00:12:11.875 00:12:11.875 ' 00:12:11.875 13:06:58 version -- common/autotest_common.sh@1724 -- # 
LCOV_OPTS=' 00:12:11.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.875 --rc genhtml_branch_coverage=1 00:12:11.875 --rc genhtml_function_coverage=1 00:12:11.875 --rc genhtml_legend=1 00:12:11.875 --rc geninfo_all_blocks=1 00:12:11.875 --rc geninfo_unexecuted_blocks=1 00:12:11.875 00:12:11.875 ' 00:12:11.875 13:06:58 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:11.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.875 --rc genhtml_branch_coverage=1 00:12:11.875 --rc genhtml_function_coverage=1 00:12:11.875 --rc genhtml_legend=1 00:12:11.875 --rc geninfo_all_blocks=1 00:12:11.875 --rc geninfo_unexecuted_blocks=1 00:12:11.875 00:12:11.875 ' 00:12:11.875 13:06:58 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:11.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.875 --rc genhtml_branch_coverage=1 00:12:11.875 --rc genhtml_function_coverage=1 00:12:11.875 --rc genhtml_legend=1 00:12:11.875 --rc geninfo_all_blocks=1 00:12:11.875 --rc geninfo_unexecuted_blocks=1 00:12:11.875 00:12:11.875 ' 00:12:11.875 13:06:58 version -- app/version.sh@17 -- # get_header_version major 00:12:11.875 13:06:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:11.875 13:06:58 version -- app/version.sh@14 -- # cut -f2 00:12:11.875 13:06:58 version -- app/version.sh@14 -- # tr -d '"' 00:12:11.875 13:06:58 version -- app/version.sh@17 -- # major=25 00:12:12.133 13:06:58 version -- app/version.sh@18 -- # get_header_version minor 00:12:12.133 13:06:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:12.133 13:06:58 version -- app/version.sh@14 -- # cut -f2 00:12:12.133 13:06:58 version -- app/version.sh@14 -- # tr -d '"' 00:12:12.133 13:06:58 version -- app/version.sh@18 -- # minor=1 00:12:12.133 13:06:58 
version -- app/version.sh@19 -- # get_header_version patch 00:12:12.133 13:06:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:12.133 13:06:58 version -- app/version.sh@14 -- # cut -f2 00:12:12.133 13:06:58 version -- app/version.sh@14 -- # tr -d '"' 00:12:12.133 13:06:58 version -- app/version.sh@19 -- # patch=0 00:12:12.133 13:06:58 version -- app/version.sh@20 -- # get_header_version suffix 00:12:12.133 13:06:58 version -- app/version.sh@14 -- # cut -f2 00:12:12.133 13:06:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:12.133 13:06:58 version -- app/version.sh@14 -- # tr -d '"' 00:12:12.133 13:06:58 version -- app/version.sh@20 -- # suffix=-pre 00:12:12.133 13:06:58 version -- app/version.sh@22 -- # version=25.1 00:12:12.133 13:06:58 version -- app/version.sh@25 -- # (( patch != 0 )) 00:12:12.133 13:06:58 version -- app/version.sh@28 -- # version=25.1rc0 00:12:12.133 13:06:58 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:12.133 13:06:58 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:12:12.133 13:06:58 version -- app/version.sh@30 -- # py_version=25.1rc0 00:12:12.133 13:06:58 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:12:12.133 00:12:12.133 real 0m0.272s 00:12:12.133 user 0m0.175s 00:12:12.133 sys 0m0.135s 00:12:12.133 13:06:58 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.133 13:06:58 version -- common/autotest_common.sh@10 -- # set +x 00:12:12.133 ************************************ 00:12:12.133 END TEST version 00:12:12.133 ************************************ 00:12:12.133 
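[Editor's note] The version.sh steps traced above grep SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX out of include/spdk/version.h (25, 1, 0, "-pre" in this run) and assemble `25.1rc0` to compare against `spdk.__version__`. A sketch of that assembly, assuming the "-pre" suffix always maps to an `rc0` tag as it does in this log (the shell script's exact suffix handling is not fully visible here):

```python
# Sketch: version-string assembly as performed by test/app/version.sh above.
def spdk_version(major, minor, patch, suffix):
    version = f"{major}.{minor}"
    if patch != 0:
        version += f".{patch}"   # patch is appended only when nonzero, per @25
    if suffix:
        version += "rc0"         # "-pre" suffix becomes an rc0 tag, per @28
    return version

print(spdk_version(25, 1, 0, "-pre"))  # matches py_version=25.1rc0 above
```

The final check in the log (`[[ 25.1rc0 == \2\5\.\1\r\c\0 ]]`) is this string compared against what the Python `spdk` package reports.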
13:06:59 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:12:12.133 13:06:59 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:12:12.133 13:06:59 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:12:12.133 13:06:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:12.133 13:06:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.133 13:06:59 -- common/autotest_common.sh@10 -- # set +x 00:12:12.133 ************************************ 00:12:12.133 START TEST bdev_raid 00:12:12.133 ************************************ 00:12:12.133 13:06:59 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:12:12.133 * Looking for test storage... 00:12:12.133 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:12.133 13:06:59 bdev_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:12.133 13:06:59 bdev_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:12:12.133 13:06:59 bdev_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:12.392 13:06:59 bdev_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:12.392 13:06:59 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:12.392 13:06:59 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:12.392 13:06:59 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:12.392 13:06:59 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:12:12.392 13:06:59 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:12:12.392 13:06:59 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:12:12.392 13:06:59 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:12:12.392 13:06:59 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:12:12.392 13:06:59 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:12:12.392 13:06:59 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:12:12.392 13:06:59 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:12:12.392 13:06:59 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:12:12.392 13:06:59 bdev_raid -- scripts/common.sh@345 -- # : 1 00:12:12.392 13:06:59 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:12.392 13:06:59 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:12.392 13:06:59 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:12:12.392 13:06:59 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:12:12.392 13:06:59 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:12.392 13:06:59 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:12:12.392 13:06:59 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:12.392 13:06:59 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:12:12.392 13:06:59 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:12:12.392 13:06:59 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:12.392 13:06:59 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:12:12.392 13:06:59 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:12.392 13:06:59 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:12.392 13:06:59 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:12.392 13:06:59 bdev_raid -- scripts/common.sh@368 -- # return 0 00:12:12.392 13:06:59 bdev_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:12.392 13:06:59 bdev_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:12.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.392 --rc genhtml_branch_coverage=1 00:12:12.392 --rc genhtml_function_coverage=1 00:12:12.392 --rc genhtml_legend=1 00:12:12.392 --rc geninfo_all_blocks=1 00:12:12.392 --rc geninfo_unexecuted_blocks=1 00:12:12.392 00:12:12.392 ' 00:12:12.392 13:06:59 bdev_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:12.392 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:12.392 --rc genhtml_branch_coverage=1 00:12:12.392 --rc genhtml_function_coverage=1 00:12:12.392 --rc genhtml_legend=1 00:12:12.392 --rc geninfo_all_blocks=1 00:12:12.392 --rc geninfo_unexecuted_blocks=1 00:12:12.392 00:12:12.392 ' 00:12:12.392 13:06:59 bdev_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:12.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.392 --rc genhtml_branch_coverage=1 00:12:12.392 --rc genhtml_function_coverage=1 00:12:12.392 --rc genhtml_legend=1 00:12:12.392 --rc geninfo_all_blocks=1 00:12:12.392 --rc geninfo_unexecuted_blocks=1 00:12:12.392 00:12:12.392 ' 00:12:12.392 13:06:59 bdev_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:12.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.392 --rc genhtml_branch_coverage=1 00:12:12.392 --rc genhtml_function_coverage=1 00:12:12.392 --rc genhtml_legend=1 00:12:12.392 --rc geninfo_all_blocks=1 00:12:12.392 --rc geninfo_unexecuted_blocks=1 00:12:12.392 00:12:12.392 ' 00:12:12.392 13:06:59 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:12.392 13:06:59 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:12:12.392 13:06:59 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:12:12.392 13:06:59 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:12:12.392 13:06:59 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:12:12.392 13:06:59 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:12:12.392 13:06:59 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:12:12.392 13:06:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:12.392 13:06:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.392 13:06:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:12.392 ************************************ 
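[Editor's note] The `lt 1.15 2` / `cmp_versions` trace repeated in each test's prologue (scripts/common.sh@333-368) splits both version strings on `.`, `-`, and `:`, then compares components numerically until one side wins. A simplified sketch of that comparison (missing and non-numeric components are treated as 0 here, which is a simplification of the shell's `decimal` helper):

```python
# Sketch: the cmp_versions "less than" logic traced repeatedly above,
# as used to check lcov's version (1.15) against 2.
import re

def lt(v1, v2):
    """True when v1 sorts before v2, comparing dot/dash/colon components numerically."""
    a = re.split(r"[.\-:]", v1)
    b = re.split(r"[.\-:]", v2)
    for i in range(max(len(a), len(b))):
        x = int(a[i]) if i < len(a) and a[i].isdigit() else 0
        y = int(b[i]) if i < len(b) and b[i].isdigit() else 0
        if x > y:
            return False
        if x < y:
            return True
    return False  # equal versions are not "less than"

print(lt("1.15", "2"))  # the check traced above: lcov 1.15 < 2 -> True
```

In the trace this returns 0 (true) at the first component (1 < 2), which is why the LCOV_OPTS exports that follow it are applied.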
00:12:12.392 START TEST raid1_resize_data_offset_test 00:12:12.392 ************************************ 00:12:12.392 13:06:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:12:12.392 13:06:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60170 00:12:12.392 Process raid pid: 60170 00:12:12.392 13:06:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60170' 00:12:12.392 13:06:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60170 00:12:12.392 13:06:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60170 ']' 00:12:12.392 13:06:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:12.392 13:06:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.392 13:06:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:12.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.392 13:06:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.392 13:06:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:12.392 13:06:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.392 [2024-12-06 13:06:59.338999] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:12:12.392 [2024-12-06 13:06:59.339232] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:12.650 [2024-12-06 13:06:59.519219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:12.908 [2024-12-06 13:06:59.669407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:12.908 [2024-12-06 13:06:59.902182] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:12.908 [2024-12-06 13:06:59.902260] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:13.474 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:13.474 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0
00:12:13.474 13:07:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16
00:12:13.474 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:13.474 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:12:13.731 malloc0
00:12:13.731 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:13.731 13:07:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16
00:12:13.731 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:13.731 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:12:13.731 malloc1
00:12:13.731 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:13.731 13:07:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512
00:12:13.731 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:13.731 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:12:13.731 null0
00:12:13.731 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:13.731 13:07:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s
00:12:13.731 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:13.731 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:12:13.731 [2024-12-06 13:07:00.618414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed
00:12:13.731 [2024-12-06 13:07:00.621069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:12:13.731 [2024-12-06 13:07:00.621158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed
00:12:13.732 [2024-12-06 13:07:00.621389] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:12:13.732 [2024-12-06 13:07:00.621421] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512
00:12:13.732 [2024-12-06 13:07:00.621762] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:12:13.732 [2024-12-06 13:07:00.622037] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:12:13.732 [2024-12-06 13:07:00.622070] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:12:13.732 [2024-12-06 13:07:00.622252] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:13.732 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:13.732 13:07:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:13.732 13:07:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:12:13.732 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:13.732 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:12:13.732 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:13.732 13:07:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 ))
00:12:13.732 13:07:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0
00:12:13.732 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:13.732 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:12:13.732 [2024-12-06 13:07:00.682454] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0
00:12:13.732 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:13.732 13:07:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30
00:12:13.732 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:13.732 13:07:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:12:14.297 malloc2
00:12:14.297 13:07:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:14.297 13:07:01 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev Raid malloc2
00:12:14.297 13:07:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:14.297 13:07:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:12:14.297 [2024-12-06 13:07:01.297435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:12:14.555 [2024-12-06 13:07:01.315941] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:12:14.556 13:07:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:14.556 [2024-12-06 13:07:01.318583] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid
00:12:14.556 13:07:01 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:14.556 13:07:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:14.556 13:07:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:12:14.556 13:07:01 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:12:14.556 13:07:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:14.556 13:07:01 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 ))
00:12:14.556 13:07:01 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60170
00:12:14.556 13:07:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60170 ']'
00:12:14.556 13:07:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60170
00:12:14.556 13:07:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname
00:12:14.556 13:07:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:14.556 13:07:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60170
00:12:14.556 13:07:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:14.556 13:07:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:14.556 killing process with pid 60170
00:12:14.556 13:07:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60170'
00:12:14.556 13:07:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60170
00:12:14.556 13:07:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60170
00:12:14.556 [2024-12-06 13:07:01.411227] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:12:14.556 [2024-12-06 13:07:01.412137] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled
00:12:14.556 [2024-12-06 13:07:01.412234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:14.556 [2024-12-06 13:07:01.412264] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2
00:12:14.556 [2024-12-06 13:07:01.444369] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:14.556 [2024-12-06 13:07:01.444862] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:14.556 [2024-12-06 13:07:01.444896] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:12:16.456 [2024-12-06 13:07:03.202220] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:12:17.392 13:07:04 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0
00:12:17.392
00:12:17.392 real 0m5.122s
00:12:17.392 user 0m4.958s
00:12:17.392 sys 0m0.837s
00:12:17.392 13:07:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:17.392 13:07:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:12:17.392 ************************************
00:12:17.392 END TEST raid1_resize_data_offset_test
00:12:17.392 ************************************
00:12:17.392 13:07:04 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0
00:12:17.392 13:07:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:17.392 13:07:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:17.392 13:07:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:12:17.651 ************************************
00:12:17.651 START TEST raid0_resize_superblock_test
00:12:17.651 ************************************
00:12:17.651 13:07:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0
00:12:17.651 13:07:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0
00:12:17.651 13:07:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60259
00:12:17.651 Process raid pid: 60259
00:12:17.651 13:07:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60259'
00:12:17.651 13:07:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:12:17.651 13:07:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60259
00:12:17.651 13:07:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60259 ']'
00:12:17.651 13:07:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:17.651 13:07:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:17.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:17.651 13:07:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:17.651 13:07:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:17.651 13:07:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:17.651 [2024-12-06 13:07:04.526400] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization...
[2024-12-06 13:07:04.526683] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:17.910 [2024-12-06 13:07:04.747241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:17.910 [2024-12-06 13:07:04.891381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:18.169 [2024-12-06 13:07:05.117619] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:18.169 [2024-12-06 13:07:05.117682] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:18.734 13:07:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:18.734 13:07:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:12:18.734 13:07:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:12:18.734 13:07:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:18.734 13:07:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:19.301 malloc0
00:12:19.301 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:19.301 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:12:19.301 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:19.301 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:19.301 [2024-12-06 13:07:06.192550] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0
[2024-12-06 13:07:06.192647] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-12-06 13:07:06.192693] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
[2024-12-06 13:07:06.192717] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-12-06 13:07:06.195720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-12-06 13:07:06.195775] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
pt0
00:12:19.301 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:19.301 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:12:19.301 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:19.301 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:19.301 65dd623f-b5f2-4594-a662-c6df7edb7e61
00:12:19.301 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:19.301 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:12:19.301 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:19.301 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:19.560 fa9084b4-f425-4163-9acb-9a88338a4eaa
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:19.560 388c3e61-92e7-4f58-8459-58a6b80af02a
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:19.560 [2024-12-06 13:07:06.339891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev fa9084b4-f425-4163-9acb-9a88338a4eaa is claimed
[2024-12-06 13:07:06.340025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 388c3e61-92e7-4f58-8459-58a6b80af02a is claimed
[2024-12-06 13:07:06.340217] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
[2024-12-06 13:07:06.340248] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512
[2024-12-06 13:07:06.340645] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
[2024-12-06 13:07:06.340921] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
[2024-12-06 13:07:06.340955] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
[2024-12-06 13:07:06.341157] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks'
[2024-12-06 13:07:06.456202] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 ))
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
[2024-12-06 13:07:06.500203] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
[2024-12-06 13:07:06.500247] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'fa9084b4-f425-4163-9acb-9a88338a4eaa' was resized: old size 131072, new size 204800
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
[2024-12-06 13:07:06.508032] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
[2024-12-06 13:07:06.508067] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '388c3e61-92e7-4f58-8459-58a6b80af02a' was resized: old size 131072, new size 204800
[2024-12-06 13:07:06.508105] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:12:19.560 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:12:19.819 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:12:19.819 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:19.819 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:19.819 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:19.819 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:12:19.819 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:12:19.819 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid
00:12:19.819 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:12:19.819 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks'
00:12:19.819 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:19.819 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
[2024-12-06 13:07:06.632272] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:19.820 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:19.820 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:12:19.820 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:12:19.820 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 ))
00:12:19.820 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:12:19.820 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:19.820 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
[2024-12-06 13:07:06.684018] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
[2024-12-06 13:07:06.684136] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
[2024-12-06 13:07:06.684163] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-12-06 13:07:06.684181] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
[2024-12-06 13:07:06.684341] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-12-06 13:07:06.684397] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-12-06 13:07:06.684446] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:12:19.820 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:19.820 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:12:19.820 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:19.820 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
[2024-12-06 13:07:06.691864] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0
[2024-12-06 13:07:06.691934] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-12-06 13:07:06.691965] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
[2024-12-06 13:07:06.691984] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-12-06 13:07:06.694870] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-12-06 13:07:06.694925] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:12:19.820 pt0
00:12:19.820 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:19.820 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:12:19.820 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:19.820 [2024-12-06 13:07:06.697290] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev fa9084b4-f425-4163-9acb-9a88338a4eaa
[2024-12-06 13:07:06.697363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev fa9084b4-f425-4163-9acb-9a88338a4eaa is claimed
13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:19.820 [2024-12-06 13:07:06.697535] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 388c3e61-92e7-4f58-8459-58a6b80af02a
[2024-12-06 13:07:06.697572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 388c3e61-92e7-4f58-8459-58a6b80af02a is claimed
[2024-12-06 13:07:06.697748] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 388c3e61-92e7-4f58-8459-58a6b80af02a (2) smaller than existing raid bdev Raid (3)
[2024-12-06 13:07:06.697795] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev fa9084b4-f425-4163-9acb-9a88338a4eaa: File exists
[2024-12-06 13:07:06.697855] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
[2024-12-06 13:07:06.697875] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512
[2024-12-06 13:07:06.698205] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
[2024-12-06 13:07:06.698421] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
[2024-12-06 13:07:06.698446] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00
[2024-12-06 13:07:06.698689] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:19.820 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:19.820 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:12:19.820 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid
00:12:19.820 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:12:19.820 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks'
00:12:19.820 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:19.820 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
[2024-12-06 13:07:06.716202] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:19.820 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:19.820 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:12:19.820 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:12:19.820 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 ))
00:12:19.820 13:07:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60259
00:12:19.820 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60259 ']'
00:12:19.820 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60259
00:12:19.820 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname
00:12:19.820 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:19.820 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60259
00:12:19.820 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:19.820 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:19.820 killing process with pid 60259
00:12:19.820 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60259'
00:12:19.820 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60259
00:12:19.820 [2024-12-06 13:07:06.794070] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:12:19.820 13:07:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60259
[2024-12-06 13:07:06.794201] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-12-06 13:07:06.794272] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-12-06 13:07:06.794288] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline
00:12:21.196 [2024-12-06 13:07:08.116830] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:12:22.572 13:07:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:12:22.572
00:12:22.572 real 0m4.796s
00:12:22.572 user 0m5.076s
00:12:22.572 sys 0m0.759s
00:12:22.572 13:07:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:22.572 13:07:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:22.572 ************************************
00:12:22.572 END TEST raid0_resize_superblock_test
00:12:22.572 ************************************
00:12:22.572 13:07:09 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1
00:12:22.572 13:07:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:22.572 13:07:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:22.572 13:07:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:12:22.572 ************************************
00:12:22.572 START TEST raid1_resize_superblock_test
00:12:22.572 ************************************
00:12:22.572 13:07:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1
00:12:22.572 13:07:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1
00:12:22.572 13:07:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60358
00:12:22.572 Process raid pid: 60358
00:12:22.572 13:07:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60358'
00:12:22.572 13:07:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60358
00:12:22.572 13:07:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60358 ']'
00:12:22.572 13:07:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:12:22.572 13:07:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:22.572 13:07:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:22.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:22.572 13:07:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.572 13:07:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:22.572 13:07:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.572 [2024-12-06 13:07:09.362007] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:12:22.572 [2024-12-06 13:07:09.362885] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.572 [2024-12-06 13:07:09.560044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.831 [2024-12-06 13:07:09.694898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.115 [2024-12-06 13:07:09.905755] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:23.115 [2024-12-06 13:07:09.905812] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:23.389 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:23.389 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:23.389 13:07:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:12:23.389 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.389 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.956 malloc0 00:12:23.956 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.956 13:07:10 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:12:23.956 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.956 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.956 [2024-12-06 13:07:10.905614] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:12:23.956 [2024-12-06 13:07:10.905703] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.956 [2024-12-06 13:07:10.905742] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:23.956 [2024-12-06 13:07:10.905765] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.956 [2024-12-06 13:07:10.908935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.956 [2024-12-06 13:07:10.908990] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:12:23.956 pt0 00:12:23.956 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.956 13:07:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:12:23.956 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.956 13:07:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.215 ed46014d-0195-4dad-84a0-a8e1db8a0317 00:12:24.215 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.215 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:12:24.215 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.215 13:07:11 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.215 d4ec1856-1ddd-4a33-8876-a27b56190194 00:12:24.215 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.215 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:12:24.215 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.215 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.215 fcbbe5f6-63f9-499e-9da0-964ed6bf3e0f 00:12:24.215 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.215 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:12:24.215 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:12:24.215 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.215 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.215 [2024-12-06 13:07:11.102806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev d4ec1856-1ddd-4a33-8876-a27b56190194 is claimed 00:12:24.215 [2024-12-06 13:07:11.103015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev fcbbe5f6-63f9-499e-9da0-964ed6bf3e0f is claimed 00:12:24.215 [2024-12-06 13:07:11.103309] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:24.215 [2024-12-06 13:07:11.103352] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:12:24.215 [2024-12-06 13:07:11.103799] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:24.215 [2024-12-06 13:07:11.104141] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:24.215 [2024-12-06 13:07:11.104173] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:12:24.215 [2024-12-06 13:07:11.104437] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.215 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.215 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:12:24.215 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.215 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.215 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:12:24.215 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.215 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:12:24.215 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:12:24.215 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.215 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.215 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:12:24.215 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.215 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:12:24.215 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:12:24.215 13:07:11 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:24.215 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.215 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.215 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:12:24.215 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:12:24.216 [2024-12-06 13:07:11.223196] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:24.474 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.474 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:12:24.474 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:12:24.474 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:12:24.474 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:12:24.474 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.474 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.475 [2024-12-06 13:07:11.279354] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:24.475 [2024-12-06 13:07:11.279446] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'd4ec1856-1ddd-4a33-8876-a27b56190194' was resized: old size 131072, new size 204800 00:12:24.475 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.475 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:12:24.475 13:07:11 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.475 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.475 [2024-12-06 13:07:11.287129] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:24.475 [2024-12-06 13:07:11.287187] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'fcbbe5f6-63f9-499e-9da0-964ed6bf3e0f' was resized: old size 131072, new size 204800 00:12:24.475 [2024-12-06 13:07:11.287254] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:12:24.475 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.475 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:12:24.475 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.475 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.475 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:12:24.475 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.475 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:12:24.475 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:12:24.475 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:12:24.475 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.475 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.475 13:07:11 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.475 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:12:24.475 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:12:24.475 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:24.475 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.475 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.475 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:12:24.475 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:12:24.475 [2024-12-06 13:07:11.407281] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:24.475 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.475 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:12:24.475 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:12:24.475 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:12:24.475 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:12:24.475 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.475 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.475 [2024-12-06 13:07:11.459035] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:12:24.475 [2024-12-06 13:07:11.459179] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:12:24.475 [2024-12-06 13:07:11.459226] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:12:24.475 [2024-12-06 13:07:11.459509] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:24.475 [2024-12-06 13:07:11.459869] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:24.475 [2024-12-06 13:07:11.460030] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:24.475 [2024-12-06 13:07:11.460061] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:12:24.475 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.475 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:12:24.475 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.475 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.475 [2024-12-06 13:07:11.470874] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:12:24.475 [2024-12-06 13:07:11.471041] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.475 [2024-12-06 13:07:11.471090] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:12:24.475 [2024-12-06 13:07:11.471117] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.475 [2024-12-06 13:07:11.474714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.475 [2024-12-06 13:07:11.474782] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:12:24.475 pt0 00:12:24.475 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.475 
13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:12:24.475 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.475 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.475 [2024-12-06 13:07:11.477694] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev d4ec1856-1ddd-4a33-8876-a27b56190194 00:12:24.475 [2024-12-06 13:07:11.477811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev d4ec1856-1ddd-4a33-8876-a27b56190194 is claimed 00:12:24.475 [2024-12-06 13:07:11.477975] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev fcbbe5f6-63f9-499e-9da0-964ed6bf3e0f 00:12:24.475 [2024-12-06 13:07:11.478017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev fcbbe5f6-63f9-499e-9da0-964ed6bf3e0f is claimed 00:12:24.475 [2024-12-06 13:07:11.478215] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev fcbbe5f6-63f9-499e-9da0-964ed6bf3e0f (2) smaller than existing raid bdev Raid (3) 00:12:24.475 [2024-12-06 13:07:11.478256] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev d4ec1856-1ddd-4a33-8876-a27b56190194: File exists 00:12:24.475 [2024-12-06 13:07:11.478324] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:24.475 [2024-12-06 13:07:11.478358] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:24.475 [2024-12-06 13:07:11.478775] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:24.475 [2024-12-06 13:07:11.479103] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:24.475 [2024-12-06 13:07:11.479134] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:12:24.475 
[2024-12-06 13:07:11.479411] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.733 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.733 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:12:24.733 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:24.733 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:12:24.733 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:12:24.733 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.733 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.734 [2024-12-06 13:07:11.499727] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:24.734 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.734 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:12:24.734 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:12:24.734 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:12:24.734 13:07:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60358 00:12:24.734 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60358 ']' 00:12:24.734 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60358 00:12:24.734 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:24.734 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:12:24.734 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60358 00:12:24.734 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:24.734 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:24.734 killing process with pid 60358 00:12:24.734 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60358' 00:12:24.734 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60358 00:12:24.734 [2024-12-06 13:07:11.579374] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:24.734 13:07:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60358 00:12:24.734 [2024-12-06 13:07:11.579534] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:24.734 [2024-12-06 13:07:11.579635] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:24.734 [2024-12-06 13:07:11.579655] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:12:26.171 [2024-12-06 13:07:12.953416] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:27.545 13:07:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:12:27.545 00:12:27.545 real 0m4.857s 00:12:27.545 user 0m5.132s 00:12:27.545 sys 0m0.695s 00:12:27.545 13:07:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:27.545 13:07:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.545 ************************************ 00:12:27.545 END TEST raid1_resize_superblock_test 00:12:27.545 ************************************ 00:12:27.545 
13:07:14 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:12:27.545 13:07:14 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:12:27.545 13:07:14 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:12:27.545 13:07:14 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:12:27.545 13:07:14 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:12:27.545 13:07:14 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:12:27.545 13:07:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:27.545 13:07:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:27.545 13:07:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:27.545 ************************************ 00:12:27.545 START TEST raid_function_test_raid0 00:12:27.545 ************************************ 00:12:27.545 13:07:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:12:27.545 13:07:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:12:27.545 13:07:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:12:27.545 13:07:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:12:27.545 13:07:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60466 00:12:27.545 Process raid pid: 60466 00:12:27.545 13:07:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60466' 00:12:27.545 13:07:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:27.545 13:07:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60466 00:12:27.545 13:07:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60466 ']' 00:12:27.545 13:07:14 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.545 13:07:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:27.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.545 13:07:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.546 13:07:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:27.546 13:07:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:12:27.546 [2024-12-06 13:07:14.314725] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:12:27.546 [2024-12-06 13:07:14.315735] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.546 [2024-12-06 13:07:14.518688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.804 [2024-12-06 13:07:14.675123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.061 [2024-12-06 13:07:14.910072] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:28.061 [2024-12-06 13:07:14.910151] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:28.650 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:28.650 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:12:28.650 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:12:28.650 13:07:15 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.650 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:12:28.650 Base_1 00:12:28.650 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.650 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:12:28.650 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.650 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:12:28.650 Base_2 00:12:28.650 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.650 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:12:28.650 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.650 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:12:28.650 [2024-12-06 13:07:15.467232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:12:28.650 [2024-12-06 13:07:15.470097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:12:28.650 [2024-12-06 13:07:15.470211] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:28.650 [2024-12-06 13:07:15.470236] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:28.650 [2024-12-06 13:07:15.470678] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:28.650 [2024-12-06 13:07:15.470924] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:28.650 [2024-12-06 13:07:15.470953] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, 
raid_bdev 0x617000007780 00:12:28.650 [2024-12-06 13:07:15.471207] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.650 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.650 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:28.650 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.650 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:12:28.650 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:12:28.650 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.650 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:12:28.650 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:12:28.650 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:12:28.650 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:28.650 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:12:28.650 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:28.650 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:28.650 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:28.650 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:12:28.650 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:28.650 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:28.650 
13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:12:28.908 [2024-12-06 13:07:15.823432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:28.908 /dev/nbd0 00:12:28.908 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:28.908 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:28.908 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:28.908 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:12:28.908 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:28.908 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:28.908 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:28.908 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:12:28.908 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:28.908 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:28.908 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:28.908 1+0 records in 00:12:28.908 1+0 records out 00:12:28.908 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311697 s, 13.1 MB/s 00:12:28.908 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.908 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:12:28.908 
13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:28.908 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:28.908 13:07:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:12:28.908 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:28.908 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:28.908 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:12:28.908 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:12:28.908 13:07:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:12:29.474 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:29.474 { 00:12:29.474 "nbd_device": "/dev/nbd0", 00:12:29.474 "bdev_name": "raid" 00:12:29.474 } 00:12:29.474 ]' 00:12:29.474 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:29.474 { 00:12:29.474 "nbd_device": "/dev/nbd0", 00:12:29.474 "bdev_name": "raid" 00:12:29.474 } 00:12:29.474 ]' 00:12:29.474 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:29.474 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:12:29.474 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:12:29.474 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:29.474 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:12:29.474 13:07:16 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@66 -- # echo 1 00:12:29.474 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:12:29.474 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:12:29.474 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:12:29.474 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:12:29.474 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:12:29.474 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:12:29.474 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:12:29.474 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:12:29.474 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:12:29.474 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:12:29.474 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:12:29.474 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:12:29.474 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:12:29.474 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:12:29.474 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:12:29.474 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:12:29.474 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:12:29.474 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:12:29.474 13:07:16 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:12:29.474 4096+0 records in 00:12:29.474 4096+0 records out 00:12:29.474 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.03369 s, 62.2 MB/s 00:12:29.474 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:12:29.733 4096+0 records in 00:12:29.733 4096+0 records out 00:12:29.733 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.408825 s, 5.1 MB/s 00:12:29.733 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:12:29.733 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:12:29.992 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:12:29.992 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:12:29.992 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:12:29.992 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:12:29.992 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:12:29.992 128+0 records in 00:12:29.992 128+0 records out 00:12:29.992 65536 bytes (66 kB, 64 KiB) copied, 0.000519507 s, 126 MB/s 00:12:29.992 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:12:29.992 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:12:29.992 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:12:29.992 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:12:29.992 
13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:12:29.992 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:12:29.992 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:12:29.992 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:12:29.992 2035+0 records in 00:12:29.992 2035+0 records out 00:12:29.993 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0131711 s, 79.1 MB/s 00:12:29.993 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:12:29.993 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:12:29.993 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:12:29.993 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:12:29.993 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:12:29.993 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:12:29.993 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:12:29.993 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:12:29.993 456+0 records in 00:12:29.993 456+0 records out 00:12:29.993 233472 bytes (233 kB, 228 KiB) copied, 0.00243332 s, 95.9 MB/s 00:12:29.993 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:12:29.993 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:12:29.993 13:07:16 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:12:29.993 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:12:29.993 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:12:29.993 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:12:29.993 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:29.993 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:29.993 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:29.993 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:29.993 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:12:29.993 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:29.993 13:07:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:30.250 [2024-12-06 13:07:17.182401] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.250 13:07:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:30.250 13:07:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:30.250 13:07:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:30.250 13:07:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:30.250 13:07:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:30.250 13:07:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:30.250 13:07:17 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:12:30.250 13:07:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:12:30.250 13:07:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:12:30.250 13:07:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:12:30.251 13:07:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:12:30.509 13:07:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:30.509 13:07:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:30.509 13:07:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:30.509 13:07:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:30.509 13:07:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:30.509 13:07:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:30.509 13:07:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:12:30.509 13:07:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:12:30.509 13:07:17 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:30.767 13:07:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:12:30.767 13:07:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:12:30.767 13:07:17 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60466 00:12:30.767 13:07:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60466 ']' 00:12:30.767 13:07:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60466 
00:12:30.767 13:07:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:12:30.767 13:07:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:30.767 13:07:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60466 00:12:30.767 13:07:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:30.767 13:07:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:30.767 killing process with pid 60466 00:12:30.767 13:07:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60466' 00:12:30.767 13:07:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60466 00:12:30.767 [2024-12-06 13:07:17.562798] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:30.767 13:07:17 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60466 00:12:30.767 [2024-12-06 13:07:17.562988] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:30.767 [2024-12-06 13:07:17.563078] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:30.767 [2024-12-06 13:07:17.563109] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:12:30.767 [2024-12-06 13:07:17.760451] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:32.143 13:07:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:12:32.143 00:12:32.143 real 0m4.697s 00:12:32.143 user 0m5.696s 00:12:32.143 sys 0m1.189s 00:12:32.143 13:07:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.143 13:07:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 
00:12:32.143 ************************************ 00:12:32.143 END TEST raid_function_test_raid0 00:12:32.143 ************************************ 00:12:32.143 13:07:18 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:12:32.143 13:07:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:32.143 13:07:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.143 13:07:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:32.143 ************************************ 00:12:32.143 START TEST raid_function_test_concat 00:12:32.143 ************************************ 00:12:32.143 13:07:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:12:32.143 13:07:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:12:32.143 13:07:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:12:32.143 13:07:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:12:32.143 Process raid pid: 60606 00:12:32.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:32.143 13:07:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60606 00:12:32.143 13:07:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60606' 00:12:32.143 13:07:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:32.143 13:07:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60606 00:12:32.143 13:07:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60606 ']' 00:12:32.143 13:07:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.143 13:07:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:32.143 13:07:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.143 13:07:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:32.143 13:07:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:12:32.143 [2024-12-06 13:07:19.046167] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:12:32.143 [2024-12-06 13:07:19.046836] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.402 [2024-12-06 13:07:19.225125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.402 [2024-12-06 13:07:19.375759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.662 [2024-12-06 13:07:19.610521] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:32.662 [2024-12-06 13:07:19.610628] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:33.232 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:33.232 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:12:33.232 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:12:33.232 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.232 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:12:33.232 Base_1 00:12:33.232 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.232 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:12:33.232 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.232 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:12:33.232 Base_2 00:12:33.232 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.232 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:12:33.232 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.232 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:12:33.232 [2024-12-06 13:07:20.171022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:12:33.232 [2024-12-06 13:07:20.173939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:12:33.232 [2024-12-06 13:07:20.174082] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:33.232 [2024-12-06 13:07:20.174108] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:33.232 [2024-12-06 13:07:20.174562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:33.232 [2024-12-06 13:07:20.174842] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:33.232 [2024-12-06 13:07:20.174862] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:12:33.232 [2024-12-06 13:07:20.175197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.232 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.232 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:33.232 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.232 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:12:33.232 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:12:33.232 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.232 13:07:20 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:12:33.232 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:12:33.232 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:12:33.232 13:07:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:33.232 13:07:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:12:33.232 13:07:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:33.232 13:07:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:33.232 13:07:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:33.232 13:07:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:12:33.232 13:07:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:33.232 13:07:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:33.232 13:07:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:12:33.491 [2024-12-06 13:07:20.479353] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:33.491 /dev/nbd0 00:12:33.750 13:07:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:33.750 13:07:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:33.750 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:33.750 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:12:33.750 13:07:20 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:33.750 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:33.750 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:33.750 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:12:33.750 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:33.750 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:33.750 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:33.750 1+0 records in 00:12:33.750 1+0 records out 00:12:33.750 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027449 s, 14.9 MB/s 00:12:33.750 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.750 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:12:33.750 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.750 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:33.750 13:07:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:12:33.750 13:07:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:33.750 13:07:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:33.750 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:12:33.750 13:07:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 
00:12:33.750 13:07:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:12:34.008 13:07:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:34.009 { 00:12:34.009 "nbd_device": "/dev/nbd0", 00:12:34.009 "bdev_name": "raid" 00:12:34.009 } 00:12:34.009 ]' 00:12:34.009 13:07:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:34.009 { 00:12:34.009 "nbd_device": "/dev/nbd0", 00:12:34.009 "bdev_name": "raid" 00:12:34.009 } 00:12:34.009 ]' 00:12:34.009 13:07:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:34.009 13:07:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:12:34.009 13:07:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:12:34.009 13:07:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:34.009 13:07:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:12:34.009 13:07:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:12:34.009 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:12:34.009 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:12:34.009 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:12:34.009 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:12:34.009 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:12:34.009 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:12:34.009 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:12:34.009 13:07:20 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:12:34.009 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:12:34.009 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:12:34.009 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:12:34.009 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:12:34.009 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:12:34.009 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:12:34.009 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:12:34.009 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:12:34.009 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:12:34.009 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:12:34.009 13:07:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:12:34.266 4096+0 records in 00:12:34.266 4096+0 records out 00:12:34.266 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0454804 s, 46.1 MB/s 00:12:34.266 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:12:34.523 4096+0 records in 00:12:34.523 4096+0 records out 00:12:34.523 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.394154 s, 5.3 MB/s 00:12:34.523 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:12:34.523 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest 
/dev/nbd0 00:12:34.523 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:12:34.523 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:12:34.523 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:12:34.523 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:12:34.523 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:12:34.523 128+0 records in 00:12:34.523 128+0 records out 00:12:34.523 65536 bytes (66 kB, 64 KiB) copied, 0.000895321 s, 73.2 MB/s 00:12:34.524 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:12:34.524 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:12:34.524 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:12:34.524 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:12:34.524 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:12:34.524 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:12:34.524 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:12:34.524 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:12:34.524 2035+0 records in 00:12:34.524 2035+0 records out 00:12:34.524 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00832892 s, 125 MB/s 00:12:34.524 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:12:34.524 13:07:21 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:12:34.524 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:12:34.524 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:12:34.524 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:12:34.524 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:12:34.524 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:12:34.524 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:12:34.524 456+0 records in 00:12:34.524 456+0 records out 00:12:34.524 233472 bytes (233 kB, 228 KiB) copied, 0.00243731 s, 95.8 MB/s 00:12:34.524 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:12:34.524 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:12:34.524 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:12:34.782 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:12:34.782 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:12:34.782 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:12:34.782 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:34.782 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:34.782 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:34.782 13:07:21 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:34.782 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:12:34.782 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:34.782 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:35.040 [2024-12-06 13:07:21.872381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.040 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:35.040 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:35.040 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:35.040 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.040 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.040 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:35.040 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:12:35.040 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.040 13:07:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:12:35.040 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:12:35.040 13:07:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:12:35.297 13:07:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:35.297 13:07:22 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:12:35.297 13:07:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:35.297 13:07:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:35.297 13:07:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:35.297 13:07:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:35.297 13:07:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:12:35.298 13:07:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:12:35.298 13:07:22 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:35.298 13:07:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:12:35.298 13:07:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:12:35.298 13:07:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60606 00:12:35.298 13:07:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60606 ']' 00:12:35.298 13:07:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60606 00:12:35.298 13:07:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:12:35.298 13:07:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:35.298 13:07:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60606 00:12:35.298 13:07:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:35.298 13:07:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:35.298 killing process with pid 60606 00:12:35.298 13:07:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 60606' 00:12:35.298 13:07:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60606 00:12:35.298 [2024-12-06 13:07:22.252983] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:35.298 13:07:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60606 00:12:35.298 [2024-12-06 13:07:22.253149] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:35.298 [2024-12-06 13:07:22.253239] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:35.298 [2024-12-06 13:07:22.253273] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:12:35.555 [2024-12-06 13:07:22.458654] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:36.930 13:07:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:12:36.930 00:12:36.930 real 0m4.705s 00:12:36.930 user 0m5.664s 00:12:36.930 sys 0m1.165s 00:12:36.930 13:07:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:36.930 13:07:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:12:36.930 ************************************ 00:12:36.930 END TEST raid_function_test_concat 00:12:36.930 ************************************ 00:12:36.930 13:07:23 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:12:36.930 13:07:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:36.930 13:07:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:36.930 13:07:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:36.930 ************************************ 00:12:36.930 START TEST raid0_resize_test 00:12:36.930 ************************************ 00:12:36.930 13:07:23 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@1129 -- # raid_resize_test 0 00:12:36.930 13:07:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:12:36.930 13:07:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:12:36.930 13:07:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:12:36.930 13:07:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:12:36.930 13:07:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:12:36.930 13:07:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:12:36.930 13:07:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:12:36.930 13:07:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:12:36.930 13:07:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60735 00:12:36.930 13:07:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:36.930 Process raid pid: 60735 00:12:36.930 13:07:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60735' 00:12:36.930 13:07:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60735 00:12:36.930 13:07:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60735 ']' 00:12:36.930 13:07:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.930 13:07:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:36.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.930 13:07:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:36.930 13:07:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:36.930 13:07:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.930 [2024-12-06 13:07:23.835855] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:12:36.930 [2024-12-06 13:07:23.836114] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.188 [2024-12-06 13:07:24.029565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.188 [2024-12-06 13:07:24.179723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.446 [2024-12-06 13:07:24.409886] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:37.446 [2024-12-06 13:07:24.409980] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:38.012 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:38.012 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:12:38.012 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:12:38.012 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.012 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.012 Base_1 00:12:38.012 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.012 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:12:38.012 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.012 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:12:38.012 Base_2 00:12:38.012 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.012 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:12:38.012 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:12:38.012 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.012 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.012 [2024-12-06 13:07:24.877108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:12:38.012 [2024-12-06 13:07:24.879781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:12:38.012 [2024-12-06 13:07:24.879902] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:38.012 [2024-12-06 13:07:24.879926] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:38.012 [2024-12-06 13:07:24.880311] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:12:38.012 [2024-12-06 13:07:24.880537] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:38.012 [2024-12-06 13:07:24.880568] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:12:38.012 [2024-12-06 13:07:24.880751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.012 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.012 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:12:38.012 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.012 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 
00:12:38.012 [2024-12-06 13:07:24.885100] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:38.012 [2024-12-06 13:07:24.885162] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:12:38.012 true 00:12:38.012 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.012 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:38.012 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:12:38.012 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.012 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.012 [2024-12-06 13:07:24.897394] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:38.012 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.012 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:12:38.012 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:12:38.012 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:12:38.012 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:12:38.012 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:12:38.012 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:12:38.012 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.012 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.012 [2024-12-06 13:07:24.949231] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:38.012 [2024-12-06 13:07:24.949289] 
bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:12:38.013 [2024-12-06 13:07:24.949344] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:12:38.013 true 00:12:38.013 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.013 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:38.013 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:12:38.013 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.013 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.013 [2024-12-06 13:07:24.961455] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:38.013 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.013 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:12:38.013 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:12:38.013 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:12:38.013 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:12:38.013 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:12:38.013 13:07:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60735 00:12:38.013 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60735 ']' 00:12:38.013 13:07:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60735 00:12:38.013 13:07:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:12:38.013 13:07:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- 
# '[' Linux = Linux ']' 00:12:38.013 13:07:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60735 00:12:38.270 13:07:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:38.270 13:07:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:38.270 13:07:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60735' 00:12:38.270 killing process with pid 60735 00:12:38.270 13:07:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60735 00:12:38.270 [2024-12-06 13:07:25.028457] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:38.270 13:07:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60735 00:12:38.270 [2024-12-06 13:07:25.028627] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:38.270 [2024-12-06 13:07:25.028717] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:38.270 [2024-12-06 13:07:25.028737] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:12:38.270 [2024-12-06 13:07:25.046129] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:39.203 13:07:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:12:39.203 00:12:39.204 real 0m2.515s 00:12:39.204 user 0m2.693s 00:12:39.204 sys 0m0.506s 00:12:39.204 ************************************ 00:12:39.204 END TEST raid0_resize_test 00:12:39.204 ************************************ 00:12:39.204 13:07:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:39.204 13:07:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.462 13:07:26 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:12:39.462 
13:07:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:39.462 13:07:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:39.462 13:07:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:39.462 ************************************ 00:12:39.462 START TEST raid1_resize_test 00:12:39.462 ************************************ 00:12:39.462 13:07:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:12:39.462 13:07:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:12:39.462 13:07:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:12:39.462 13:07:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:12:39.462 13:07:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:12:39.462 13:07:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:12:39.462 13:07:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:12:39.462 13:07:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:12:39.462 13:07:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:12:39.462 13:07:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60797 00:12:39.462 Process raid pid: 60797 00:12:39.462 13:07:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60797' 00:12:39.462 13:07:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:39.462 13:07:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60797 00:12:39.462 13:07:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60797 ']' 00:12:39.462 13:07:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:12:39.462 13:07:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:39.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.462 13:07:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.462 13:07:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:39.462 13:07:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.462 [2024-12-06 13:07:26.391223] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:12:39.462 [2024-12-06 13:07:26.391440] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:39.720 [2024-12-06 13:07:26.585658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.980 [2024-12-06 13:07:26.737511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.980 [2024-12-06 13:07:26.971751] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:39.980 [2024-12-06 13:07:26.971831] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:40.548 13:07:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:40.548 13:07:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:12:40.548 13:07:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:12:40.548 13:07:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.548 13:07:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.548 
Base_1 00:12:40.548 13:07:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.548 13:07:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:12:40.548 13:07:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.548 13:07:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.548 Base_2 00:12:40.548 13:07:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.548 13:07:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:12:40.548 13:07:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:12:40.548 13:07:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.548 13:07:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.548 [2024-12-06 13:07:27.489313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:12:40.548 [2024-12-06 13:07:27.492051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:12:40.548 [2024-12-06 13:07:27.492156] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:40.548 [2024-12-06 13:07:27.492179] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:40.548 [2024-12-06 13:07:27.492572] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:12:40.548 [2024-12-06 13:07:27.492777] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:40.548 [2024-12-06 13:07:27.492806] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:12:40.548 [2024-12-06 13:07:27.492993] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:12:40.548 13:07:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.548 13:07:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:12:40.548 13:07:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.548 13:07:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.548 [2024-12-06 13:07:27.497278] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:40.548 [2024-12-06 13:07:27.497339] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:12:40.548 true 00:12:40.548 13:07:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.548 13:07:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:40.548 13:07:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:12:40.548 13:07:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.548 13:07:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.548 [2024-12-06 13:07:27.509512] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:40.548 13:07:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.548 13:07:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:12:40.548 13:07:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:12:40.548 13:07:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:12:40.548 13:07:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:12:40.548 13:07:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:12:40.548 13:07:27 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:12:40.548 13:07:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.548 13:07:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.548 [2024-12-06 13:07:27.561404] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:40.548 [2024-12-06 13:07:27.561453] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:12:40.548 [2024-12-06 13:07:27.561533] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:12:40.807 true 00:12:40.807 13:07:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.807 13:07:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:40.807 13:07:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:12:40.807 13:07:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.807 13:07:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.807 [2024-12-06 13:07:27.573625] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:40.807 13:07:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.807 13:07:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:12:40.807 13:07:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:12:40.807 13:07:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:12:40.807 13:07:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:12:40.807 13:07:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:12:40.807 13:07:27 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@387 -- # killprocess 60797 00:12:40.807 13:07:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60797 ']' 00:12:40.807 13:07:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60797 00:12:40.807 13:07:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:12:40.807 13:07:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:40.807 13:07:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60797 00:12:40.807 killing process with pid 60797 00:12:40.807 13:07:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:40.807 13:07:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:40.807 13:07:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60797' 00:12:40.807 13:07:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60797 00:12:40.807 [2024-12-06 13:07:27.648870] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:40.807 13:07:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60797 00:12:40.807 [2024-12-06 13:07:27.649056] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:40.807 [2024-12-06 13:07:27.649851] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:40.807 [2024-12-06 13:07:27.649887] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:12:40.807 [2024-12-06 13:07:27.666780] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:42.183 ************************************ 00:12:42.183 END TEST raid1_resize_test 00:12:42.183 ************************************ 00:12:42.183 13:07:28 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@389 -- # return 0 00:12:42.183 00:12:42.183 real 0m2.571s 00:12:42.183 user 0m2.845s 00:12:42.183 sys 0m0.449s 00:12:42.183 13:07:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:42.184 13:07:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.184 13:07:28 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:12:42.184 13:07:28 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:42.184 13:07:28 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:12:42.184 13:07:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:42.184 13:07:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:42.184 13:07:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:42.184 ************************************ 00:12:42.184 START TEST raid_state_function_test 00:12:42.184 ************************************ 00:12:42.184 13:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:12:42.184 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:42.184 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:12:42.184 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:42.184 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:42.184 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:42.184 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:42.184 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:42.184 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:12:42.184 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:42.184 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:42.184 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:42.184 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:42.184 Process raid pid: 60859 00:12:42.184 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:42.184 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:42.184 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:42.184 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:42.184 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:42.184 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:42.184 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:42.184 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:42.184 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:42.184 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:42.184 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:42.184 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60859 00:12:42.184 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60859' 00:12:42.184 13:07:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:42.184 13:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60859 00:12:42.184 13:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60859 ']' 00:12:42.184 13:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.184 13:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:42.184 13:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.184 13:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:42.184 13:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.184 [2024-12-06 13:07:29.025334] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:12:42.184 [2024-12-06 13:07:29.025827] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:42.443 [2024-12-06 13:07:29.214101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.443 [2024-12-06 13:07:29.388122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.702 [2024-12-06 13:07:29.630536] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:42.702 [2024-12-06 13:07:29.630810] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.269 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:43.269 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:43.270 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:43.270 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.270 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.270 [2024-12-06 13:07:30.023462] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:43.270 [2024-12-06 13:07:30.023553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:43.270 [2024-12-06 13:07:30.023575] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:43.270 [2024-12-06 13:07:30.023592] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:43.270 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.270 13:07:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:43.270 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.270 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.270 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:43.270 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:43.270 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:43.270 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.270 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.270 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.270 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.270 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.270 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.270 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.270 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.270 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.270 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.270 "name": "Existed_Raid", 00:12:43.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.270 "strip_size_kb": 64, 00:12:43.270 "state": "configuring", 00:12:43.270 
"raid_level": "raid0", 00:12:43.270 "superblock": false, 00:12:43.270 "num_base_bdevs": 2, 00:12:43.270 "num_base_bdevs_discovered": 0, 00:12:43.270 "num_base_bdevs_operational": 2, 00:12:43.270 "base_bdevs_list": [ 00:12:43.270 { 00:12:43.270 "name": "BaseBdev1", 00:12:43.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.270 "is_configured": false, 00:12:43.270 "data_offset": 0, 00:12:43.270 "data_size": 0 00:12:43.270 }, 00:12:43.270 { 00:12:43.270 "name": "BaseBdev2", 00:12:43.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.270 "is_configured": false, 00:12:43.270 "data_offset": 0, 00:12:43.270 "data_size": 0 00:12:43.270 } 00:12:43.270 ] 00:12:43.270 }' 00:12:43.270 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.270 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.837 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:43.837 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.837 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.837 [2024-12-06 13:07:30.563928] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:43.837 [2024-12-06 13:07:30.564016] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:43.837 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.837 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:43.837 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.837 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:43.837 [2024-12-06 13:07:30.575798] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:43.837 [2024-12-06 13:07:30.575926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:43.837 [2024-12-06 13:07:30.575969] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:43.837 [2024-12-06 13:07:30.576018] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:43.837 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.837 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:43.837 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.837 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.837 [2024-12-06 13:07:30.642195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:43.837 BaseBdev1 00:12:43.837 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.837 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:43.837 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:43.837 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:43.837 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:43.837 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:43.837 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:43.837 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:12:43.837 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.837 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.837 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.837 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:43.837 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.837 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.837 [ 00:12:43.837 { 00:12:43.837 "name": "BaseBdev1", 00:12:43.837 "aliases": [ 00:12:43.837 "67c5ac20-ebd3-4d0d-9b32-0e273e3fa368" 00:12:43.837 ], 00:12:43.837 "product_name": "Malloc disk", 00:12:43.837 "block_size": 512, 00:12:43.837 "num_blocks": 65536, 00:12:43.837 "uuid": "67c5ac20-ebd3-4d0d-9b32-0e273e3fa368", 00:12:43.837 "assigned_rate_limits": { 00:12:43.837 "rw_ios_per_sec": 0, 00:12:43.837 "rw_mbytes_per_sec": 0, 00:12:43.837 "r_mbytes_per_sec": 0, 00:12:43.837 "w_mbytes_per_sec": 0 00:12:43.837 }, 00:12:43.837 "claimed": true, 00:12:43.837 "claim_type": "exclusive_write", 00:12:43.837 "zoned": false, 00:12:43.837 "supported_io_types": { 00:12:43.837 "read": true, 00:12:43.837 "write": true, 00:12:43.837 "unmap": true, 00:12:43.837 "flush": true, 00:12:43.837 "reset": true, 00:12:43.837 "nvme_admin": false, 00:12:43.837 "nvme_io": false, 00:12:43.837 "nvme_io_md": false, 00:12:43.837 "write_zeroes": true, 00:12:43.837 "zcopy": true, 00:12:43.837 "get_zone_info": false, 00:12:43.837 "zone_management": false, 00:12:43.837 "zone_append": false, 00:12:43.837 "compare": false, 00:12:43.837 "compare_and_write": false, 00:12:43.837 "abort": true, 00:12:43.837 "seek_hole": false, 00:12:43.837 "seek_data": false, 00:12:43.837 "copy": true, 00:12:43.837 "nvme_iov_md": 
false 00:12:43.837 }, 00:12:43.837 "memory_domains": [ 00:12:43.837 { 00:12:43.837 "dma_device_id": "system", 00:12:43.837 "dma_device_type": 1 00:12:43.837 }, 00:12:43.837 { 00:12:43.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.837 "dma_device_type": 2 00:12:43.838 } 00:12:43.838 ], 00:12:43.838 "driver_specific": {} 00:12:43.838 } 00:12:43.838 ] 00:12:43.838 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.838 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:43.838 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:43.838 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.838 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.838 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:43.838 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:43.838 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:43.838 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.838 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.838 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.838 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.838 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.838 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.838 
13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.838 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.838 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.838 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.838 "name": "Existed_Raid", 00:12:43.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.838 "strip_size_kb": 64, 00:12:43.838 "state": "configuring", 00:12:43.838 "raid_level": "raid0", 00:12:43.838 "superblock": false, 00:12:43.838 "num_base_bdevs": 2, 00:12:43.838 "num_base_bdevs_discovered": 1, 00:12:43.838 "num_base_bdevs_operational": 2, 00:12:43.838 "base_bdevs_list": [ 00:12:43.838 { 00:12:43.838 "name": "BaseBdev1", 00:12:43.838 "uuid": "67c5ac20-ebd3-4d0d-9b32-0e273e3fa368", 00:12:43.838 "is_configured": true, 00:12:43.838 "data_offset": 0, 00:12:43.838 "data_size": 65536 00:12:43.838 }, 00:12:43.838 { 00:12:43.838 "name": "BaseBdev2", 00:12:43.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.838 "is_configured": false, 00:12:43.838 "data_offset": 0, 00:12:43.838 "data_size": 0 00:12:43.838 } 00:12:43.838 ] 00:12:43.838 }' 00:12:43.838 13:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.838 13:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.405 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:44.405 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.405 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.405 [2024-12-06 13:07:31.210339] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:44.405 [2024-12-06 13:07:31.210811] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:44.405 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.405 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:44.405 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.405 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.405 [2024-12-06 13:07:31.218377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:44.405 [2024-12-06 13:07:31.221207] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:44.405 [2024-12-06 13:07:31.221267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:44.405 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.405 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:44.405 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:44.405 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:44.405 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.405 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.405 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:44.405 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.405 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:12:44.405 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.405 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.405 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.405 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.405 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.405 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.405 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.405 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.405 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.405 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.405 "name": "Existed_Raid", 00:12:44.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.405 "strip_size_kb": 64, 00:12:44.405 "state": "configuring", 00:12:44.405 "raid_level": "raid0", 00:12:44.405 "superblock": false, 00:12:44.405 "num_base_bdevs": 2, 00:12:44.405 "num_base_bdevs_discovered": 1, 00:12:44.405 "num_base_bdevs_operational": 2, 00:12:44.405 "base_bdevs_list": [ 00:12:44.405 { 00:12:44.405 "name": "BaseBdev1", 00:12:44.405 "uuid": "67c5ac20-ebd3-4d0d-9b32-0e273e3fa368", 00:12:44.405 "is_configured": true, 00:12:44.405 "data_offset": 0, 00:12:44.405 "data_size": 65536 00:12:44.405 }, 00:12:44.405 { 00:12:44.405 "name": "BaseBdev2", 00:12:44.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.405 "is_configured": false, 00:12:44.405 "data_offset": 0, 00:12:44.405 "data_size": 0 00:12:44.405 } 00:12:44.405 
] 00:12:44.405 }' 00:12:44.405 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.405 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.987 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:44.987 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.987 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.987 [2024-12-06 13:07:31.790838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:44.987 [2024-12-06 13:07:31.790907] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:44.987 [2024-12-06 13:07:31.790923] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:44.987 [2024-12-06 13:07:31.791286] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:44.987 [2024-12-06 13:07:31.791571] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:44.987 [2024-12-06 13:07:31.791594] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:44.987 [2024-12-06 13:07:31.792005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.987 BaseBdev2 00:12:44.987 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.987 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:44.987 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:44.987 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:44.987 13:07:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:44.987 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:44.987 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:44.987 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:44.987 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.987 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.987 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.987 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:44.987 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.987 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.987 [ 00:12:44.987 { 00:12:44.987 "name": "BaseBdev2", 00:12:44.987 "aliases": [ 00:12:44.987 "674c65f3-9872-47a6-92f5-a64d12a7a65b" 00:12:44.987 ], 00:12:44.987 "product_name": "Malloc disk", 00:12:44.987 "block_size": 512, 00:12:44.987 "num_blocks": 65536, 00:12:44.987 "uuid": "674c65f3-9872-47a6-92f5-a64d12a7a65b", 00:12:44.987 "assigned_rate_limits": { 00:12:44.987 "rw_ios_per_sec": 0, 00:12:44.987 "rw_mbytes_per_sec": 0, 00:12:44.987 "r_mbytes_per_sec": 0, 00:12:44.987 "w_mbytes_per_sec": 0 00:12:44.987 }, 00:12:44.987 "claimed": true, 00:12:44.987 "claim_type": "exclusive_write", 00:12:44.987 "zoned": false, 00:12:44.987 "supported_io_types": { 00:12:44.987 "read": true, 00:12:44.988 "write": true, 00:12:44.988 "unmap": true, 00:12:44.988 "flush": true, 00:12:44.988 "reset": true, 00:12:44.988 "nvme_admin": false, 00:12:44.988 "nvme_io": false, 00:12:44.988 "nvme_io_md": 
false, 00:12:44.988 "write_zeroes": true, 00:12:44.988 "zcopy": true, 00:12:44.988 "get_zone_info": false, 00:12:44.988 "zone_management": false, 00:12:44.988 "zone_append": false, 00:12:44.988 "compare": false, 00:12:44.988 "compare_and_write": false, 00:12:44.988 "abort": true, 00:12:44.988 "seek_hole": false, 00:12:44.988 "seek_data": false, 00:12:44.988 "copy": true, 00:12:44.988 "nvme_iov_md": false 00:12:44.988 }, 00:12:44.988 "memory_domains": [ 00:12:44.988 { 00:12:44.988 "dma_device_id": "system", 00:12:44.988 "dma_device_type": 1 00:12:44.988 }, 00:12:44.988 { 00:12:44.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.988 "dma_device_type": 2 00:12:44.988 } 00:12:44.988 ], 00:12:44.988 "driver_specific": {} 00:12:44.988 } 00:12:44.988 ] 00:12:44.988 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.988 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:44.988 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:44.988 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:44.988 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:12:44.988 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.988 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.988 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:44.988 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.988 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:44.988 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:44.988 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.988 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.988 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.988 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.988 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.988 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.988 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.988 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.988 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.988 "name": "Existed_Raid", 00:12:44.988 "uuid": "a4ee68e6-9d5a-4a9c-8b9d-88218d919fe0", 00:12:44.988 "strip_size_kb": 64, 00:12:44.988 "state": "online", 00:12:44.988 "raid_level": "raid0", 00:12:44.988 "superblock": false, 00:12:44.988 "num_base_bdevs": 2, 00:12:44.988 "num_base_bdevs_discovered": 2, 00:12:44.988 "num_base_bdevs_operational": 2, 00:12:44.988 "base_bdevs_list": [ 00:12:44.988 { 00:12:44.988 "name": "BaseBdev1", 00:12:44.988 "uuid": "67c5ac20-ebd3-4d0d-9b32-0e273e3fa368", 00:12:44.988 "is_configured": true, 00:12:44.988 "data_offset": 0, 00:12:44.988 "data_size": 65536 00:12:44.988 }, 00:12:44.988 { 00:12:44.988 "name": "BaseBdev2", 00:12:44.988 "uuid": "674c65f3-9872-47a6-92f5-a64d12a7a65b", 00:12:44.988 "is_configured": true, 00:12:44.988 "data_offset": 0, 00:12:44.988 "data_size": 65536 00:12:44.988 } 00:12:44.988 ] 00:12:44.988 }' 00:12:44.988 13:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:12:44.988 13:07:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.555 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:45.555 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:45.555 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:45.555 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:45.555 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:45.555 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:45.555 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:45.555 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:45.555 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.555 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.555 [2024-12-06 13:07:32.399405] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:45.555 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.555 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:45.555 "name": "Existed_Raid", 00:12:45.555 "aliases": [ 00:12:45.555 "a4ee68e6-9d5a-4a9c-8b9d-88218d919fe0" 00:12:45.555 ], 00:12:45.555 "product_name": "Raid Volume", 00:12:45.555 "block_size": 512, 00:12:45.555 "num_blocks": 131072, 00:12:45.555 "uuid": "a4ee68e6-9d5a-4a9c-8b9d-88218d919fe0", 00:12:45.555 "assigned_rate_limits": { 00:12:45.555 "rw_ios_per_sec": 0, 00:12:45.555 "rw_mbytes_per_sec": 0, 00:12:45.555 "r_mbytes_per_sec": 
0, 00:12:45.555 "w_mbytes_per_sec": 0 00:12:45.555 }, 00:12:45.555 "claimed": false, 00:12:45.555 "zoned": false, 00:12:45.555 "supported_io_types": { 00:12:45.555 "read": true, 00:12:45.555 "write": true, 00:12:45.555 "unmap": true, 00:12:45.555 "flush": true, 00:12:45.555 "reset": true, 00:12:45.556 "nvme_admin": false, 00:12:45.556 "nvme_io": false, 00:12:45.556 "nvme_io_md": false, 00:12:45.556 "write_zeroes": true, 00:12:45.556 "zcopy": false, 00:12:45.556 "get_zone_info": false, 00:12:45.556 "zone_management": false, 00:12:45.556 "zone_append": false, 00:12:45.556 "compare": false, 00:12:45.556 "compare_and_write": false, 00:12:45.556 "abort": false, 00:12:45.556 "seek_hole": false, 00:12:45.556 "seek_data": false, 00:12:45.556 "copy": false, 00:12:45.556 "nvme_iov_md": false 00:12:45.556 }, 00:12:45.556 "memory_domains": [ 00:12:45.556 { 00:12:45.556 "dma_device_id": "system", 00:12:45.556 "dma_device_type": 1 00:12:45.556 }, 00:12:45.556 { 00:12:45.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.556 "dma_device_type": 2 00:12:45.556 }, 00:12:45.556 { 00:12:45.556 "dma_device_id": "system", 00:12:45.556 "dma_device_type": 1 00:12:45.556 }, 00:12:45.556 { 00:12:45.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.556 "dma_device_type": 2 00:12:45.556 } 00:12:45.556 ], 00:12:45.556 "driver_specific": { 00:12:45.556 "raid": { 00:12:45.556 "uuid": "a4ee68e6-9d5a-4a9c-8b9d-88218d919fe0", 00:12:45.556 "strip_size_kb": 64, 00:12:45.556 "state": "online", 00:12:45.556 "raid_level": "raid0", 00:12:45.556 "superblock": false, 00:12:45.556 "num_base_bdevs": 2, 00:12:45.556 "num_base_bdevs_discovered": 2, 00:12:45.556 "num_base_bdevs_operational": 2, 00:12:45.556 "base_bdevs_list": [ 00:12:45.556 { 00:12:45.556 "name": "BaseBdev1", 00:12:45.556 "uuid": "67c5ac20-ebd3-4d0d-9b32-0e273e3fa368", 00:12:45.556 "is_configured": true, 00:12:45.556 "data_offset": 0, 00:12:45.556 "data_size": 65536 00:12:45.556 }, 00:12:45.556 { 00:12:45.556 "name": "BaseBdev2", 
00:12:45.556 "uuid": "674c65f3-9872-47a6-92f5-a64d12a7a65b", 00:12:45.556 "is_configured": true, 00:12:45.556 "data_offset": 0, 00:12:45.556 "data_size": 65536 00:12:45.556 } 00:12:45.556 ] 00:12:45.556 } 00:12:45.556 } 00:12:45.556 }' 00:12:45.556 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:45.556 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:45.556 BaseBdev2' 00:12:45.556 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.556 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:45.556 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:45.556 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:45.556 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.556 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.556 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.556 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.815 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:45.815 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.815 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:45.815 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:12:45.815 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.815 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.815 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.815 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.815 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:45.815 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.815 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:45.815 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.815 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.815 [2024-12-06 13:07:32.671215] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:45.815 [2024-12-06 13:07:32.671265] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:45.815 [2024-12-06 13:07:32.671346] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:45.815 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.815 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:45.815 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:45.815 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:45.815 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:45.815 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:12:45.815 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:12:45.815 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:45.815 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:45.815 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:45.815 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:45.815 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:45.815 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.815 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.815 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.815 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.815 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.815 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.815 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.815 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.815 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.815 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.815 "name": "Existed_Raid", 00:12:45.815 "uuid": "a4ee68e6-9d5a-4a9c-8b9d-88218d919fe0", 00:12:45.815 "strip_size_kb": 64, 00:12:45.815 
"state": "offline", 00:12:45.815 "raid_level": "raid0", 00:12:45.815 "superblock": false, 00:12:45.815 "num_base_bdevs": 2, 00:12:45.815 "num_base_bdevs_discovered": 1, 00:12:45.815 "num_base_bdevs_operational": 1, 00:12:45.815 "base_bdevs_list": [ 00:12:45.815 { 00:12:45.815 "name": null, 00:12:45.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.816 "is_configured": false, 00:12:45.816 "data_offset": 0, 00:12:45.816 "data_size": 65536 00:12:45.816 }, 00:12:45.816 { 00:12:45.816 "name": "BaseBdev2", 00:12:45.816 "uuid": "674c65f3-9872-47a6-92f5-a64d12a7a65b", 00:12:45.816 "is_configured": true, 00:12:45.816 "data_offset": 0, 00:12:45.816 "data_size": 65536 00:12:45.816 } 00:12:45.816 ] 00:12:45.816 }' 00:12:45.816 13:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.816 13:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.382 13:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:46.382 13:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:46.383 13:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.383 13:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.383 13:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:46.383 13:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.383 13:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.383 13:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:46.383 13:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:46.383 13:07:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:46.383 13:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.383 13:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.383 [2024-12-06 13:07:33.339312] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:46.383 [2024-12-06 13:07:33.339390] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:46.641 13:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.641 13:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:46.641 13:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:46.641 13:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.641 13:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:46.641 13:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.641 13:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.641 13:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.641 13:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:46.641 13:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:46.642 13:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:12:46.642 13:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60859 00:12:46.642 13:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60859 ']' 00:12:46.642 13:07:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 60859 00:12:46.642 13:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:46.642 13:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:46.642 13:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60859 00:12:46.642 killing process with pid 60859 00:12:46.642 13:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:46.642 13:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:46.642 13:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60859' 00:12:46.642 13:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60859 00:12:46.642 [2024-12-06 13:07:33.516289] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:46.642 13:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60859 00:12:46.642 [2024-12-06 13:07:33.531828] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:48.017 13:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:48.017 00:12:48.017 real 0m5.782s 00:12:48.017 user 0m8.625s 00:12:48.017 sys 0m0.851s 00:12:48.017 13:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:48.017 ************************************ 00:12:48.017 END TEST raid_state_function_test 00:12:48.017 ************************************ 00:12:48.017 13:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.017 13:07:34 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:12:48.017 13:07:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:12:48.017 13:07:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:48.017 13:07:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:48.017 ************************************ 00:12:48.017 START TEST raid_state_function_test_sb 00:12:48.017 ************************************ 00:12:48.017 13:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:12:48.017 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:48.017 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:12:48.017 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:48.017 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:48.017 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:48.017 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:48.017 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:48.017 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:48.017 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:48.017 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:48.017 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:48.017 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:48.017 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:48.017 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:12:48.017 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:48.018 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:48.018 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:48.018 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:48.018 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:48.018 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:48.018 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:48.018 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:48.018 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:48.018 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61118 00:12:48.018 Process raid pid: 61118 00:12:48.018 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:48.018 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61118' 00:12:48.018 13:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61118 00:12:48.018 13:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61118 ']' 00:12:48.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:48.018 13:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.018 13:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:48.018 13:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.018 13:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:48.018 13:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.018 [2024-12-06 13:07:34.855632] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:12:48.018 [2024-12-06 13:07:34.855955] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:48.275 [2024-12-06 13:07:35.034553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.275 [2024-12-06 13:07:35.185585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.533 [2024-12-06 13:07:35.417416] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:48.533 [2024-12-06 13:07:35.417488] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:49.099 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:49.099 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:49.099 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:49.099 13:07:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.099 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.099 [2024-12-06 13:07:35.889161] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:49.099 [2024-12-06 13:07:35.889241] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:49.099 [2024-12-06 13:07:35.889261] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:49.099 [2024-12-06 13:07:35.889278] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:49.099 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.099 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:49.099 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.099 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.099 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:49.099 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.099 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:49.099 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.099 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.099 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.099 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.099 13:07:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.099 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.099 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.099 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.099 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.099 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.099 "name": "Existed_Raid", 00:12:49.099 "uuid": "723f3968-1563-4fa0-ae84-0efd6e9a9339", 00:12:49.099 "strip_size_kb": 64, 00:12:49.099 "state": "configuring", 00:12:49.099 "raid_level": "raid0", 00:12:49.099 "superblock": true, 00:12:49.099 "num_base_bdevs": 2, 00:12:49.099 "num_base_bdevs_discovered": 0, 00:12:49.099 "num_base_bdevs_operational": 2, 00:12:49.099 "base_bdevs_list": [ 00:12:49.099 { 00:12:49.099 "name": "BaseBdev1", 00:12:49.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.099 "is_configured": false, 00:12:49.099 "data_offset": 0, 00:12:49.099 "data_size": 0 00:12:49.099 }, 00:12:49.099 { 00:12:49.099 "name": "BaseBdev2", 00:12:49.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.099 "is_configured": false, 00:12:49.099 "data_offset": 0, 00:12:49.099 "data_size": 0 00:12:49.099 } 00:12:49.099 ] 00:12:49.099 }' 00:12:49.099 13:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.099 13:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.664 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:49.664 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:49.664 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.664 [2024-12-06 13:07:36.445262] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:49.664 [2024-12-06 13:07:36.445337] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:49.664 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.664 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:49.664 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.664 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.664 [2024-12-06 13:07:36.453202] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:49.664 [2024-12-06 13:07:36.453258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:49.664 [2024-12-06 13:07:36.453291] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:49.664 [2024-12-06 13:07:36.453311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:49.664 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.664 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:49.664 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.664 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.664 [2024-12-06 13:07:36.502794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is 
claimed 00:12:49.664 BaseBdev1 00:12:49.664 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.664 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:49.664 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:49.664 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:49.664 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:49.664 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:49.665 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:49.665 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:49.665 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.665 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.665 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.665 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:49.665 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.665 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.665 [ 00:12:49.665 { 00:12:49.665 "name": "BaseBdev1", 00:12:49.665 "aliases": [ 00:12:49.665 "39aeabe1-daca-44dc-b0e7-68708f1b45f8" 00:12:49.665 ], 00:12:49.665 "product_name": "Malloc disk", 00:12:49.665 "block_size": 512, 00:12:49.665 "num_blocks": 65536, 00:12:49.665 "uuid": "39aeabe1-daca-44dc-b0e7-68708f1b45f8", 00:12:49.665 
"assigned_rate_limits": { 00:12:49.665 "rw_ios_per_sec": 0, 00:12:49.665 "rw_mbytes_per_sec": 0, 00:12:49.665 "r_mbytes_per_sec": 0, 00:12:49.665 "w_mbytes_per_sec": 0 00:12:49.665 }, 00:12:49.665 "claimed": true, 00:12:49.665 "claim_type": "exclusive_write", 00:12:49.665 "zoned": false, 00:12:49.665 "supported_io_types": { 00:12:49.665 "read": true, 00:12:49.665 "write": true, 00:12:49.665 "unmap": true, 00:12:49.665 "flush": true, 00:12:49.665 "reset": true, 00:12:49.665 "nvme_admin": false, 00:12:49.665 "nvme_io": false, 00:12:49.665 "nvme_io_md": false, 00:12:49.665 "write_zeroes": true, 00:12:49.665 "zcopy": true, 00:12:49.665 "get_zone_info": false, 00:12:49.665 "zone_management": false, 00:12:49.665 "zone_append": false, 00:12:49.665 "compare": false, 00:12:49.665 "compare_and_write": false, 00:12:49.665 "abort": true, 00:12:49.665 "seek_hole": false, 00:12:49.665 "seek_data": false, 00:12:49.665 "copy": true, 00:12:49.665 "nvme_iov_md": false 00:12:49.665 }, 00:12:49.665 "memory_domains": [ 00:12:49.665 { 00:12:49.665 "dma_device_id": "system", 00:12:49.665 "dma_device_type": 1 00:12:49.665 }, 00:12:49.665 { 00:12:49.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.665 "dma_device_type": 2 00:12:49.665 } 00:12:49.665 ], 00:12:49.665 "driver_specific": {} 00:12:49.665 } 00:12:49.665 ] 00:12:49.665 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.665 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:49.665 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:49.665 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.665 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.665 13:07:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:49.665 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.665 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:49.665 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.665 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.665 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.665 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.665 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.665 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.665 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.665 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.665 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.665 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.665 "name": "Existed_Raid", 00:12:49.665 "uuid": "808ead32-1746-4d00-839d-6dc259ebe912", 00:12:49.665 "strip_size_kb": 64, 00:12:49.665 "state": "configuring", 00:12:49.665 "raid_level": "raid0", 00:12:49.665 "superblock": true, 00:12:49.665 "num_base_bdevs": 2, 00:12:49.665 "num_base_bdevs_discovered": 1, 00:12:49.665 "num_base_bdevs_operational": 2, 00:12:49.665 "base_bdevs_list": [ 00:12:49.665 { 00:12:49.665 "name": "BaseBdev1", 00:12:49.665 "uuid": "39aeabe1-daca-44dc-b0e7-68708f1b45f8", 00:12:49.665 "is_configured": true, 00:12:49.665 "data_offset": 2048, 
00:12:49.665 "data_size": 63488 00:12:49.665 }, 00:12:49.665 { 00:12:49.665 "name": "BaseBdev2", 00:12:49.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.665 "is_configured": false, 00:12:49.665 "data_offset": 0, 00:12:49.665 "data_size": 0 00:12:49.665 } 00:12:49.665 ] 00:12:49.665 }' 00:12:49.665 13:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.665 13:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.233 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:50.233 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.233 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.233 [2024-12-06 13:07:37.067119] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:50.233 [2024-12-06 13:07:37.067210] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:50.233 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.233 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:50.233 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.233 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.233 [2024-12-06 13:07:37.075207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:50.233 [2024-12-06 13:07:37.080123] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:50.233 [2024-12-06 13:07:37.080199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:12:50.233 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.233 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:50.233 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:50.233 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:50.233 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.233 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.233 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:50.233 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:50.233 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:50.233 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.233 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.233 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.233 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.233 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.233 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.233 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.233 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:12:50.233 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.233 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.233 "name": "Existed_Raid", 00:12:50.233 "uuid": "f54aa04e-03f3-4286-93f9-66331e222bc9", 00:12:50.233 "strip_size_kb": 64, 00:12:50.233 "state": "configuring", 00:12:50.233 "raid_level": "raid0", 00:12:50.233 "superblock": true, 00:12:50.233 "num_base_bdevs": 2, 00:12:50.233 "num_base_bdevs_discovered": 1, 00:12:50.233 "num_base_bdevs_operational": 2, 00:12:50.233 "base_bdevs_list": [ 00:12:50.233 { 00:12:50.233 "name": "BaseBdev1", 00:12:50.233 "uuid": "39aeabe1-daca-44dc-b0e7-68708f1b45f8", 00:12:50.233 "is_configured": true, 00:12:50.233 "data_offset": 2048, 00:12:50.233 "data_size": 63488 00:12:50.233 }, 00:12:50.233 { 00:12:50.233 "name": "BaseBdev2", 00:12:50.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.233 "is_configured": false, 00:12:50.233 "data_offset": 0, 00:12:50.233 "data_size": 0 00:12:50.233 } 00:12:50.233 ] 00:12:50.233 }' 00:12:50.233 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.233 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.800 [2024-12-06 13:07:37.636961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:50.800 [2024-12-06 13:07:37.637308] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:50.800 [2024-12-06 13:07:37.637329] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:50.800 [2024-12-06 13:07:37.637758] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:50.800 BaseBdev2 00:12:50.800 [2024-12-06 13:07:37.637965] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:50.800 [2024-12-06 13:07:37.637989] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:50.800 [2024-12-06 13:07:37.638162] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.800 [ 00:12:50.800 { 00:12:50.800 "name": "BaseBdev2", 00:12:50.800 "aliases": [ 00:12:50.800 "f1e09bff-5ba3-4f0e-85fe-4629ff98a5e7" 00:12:50.800 ], 00:12:50.800 "product_name": "Malloc disk", 00:12:50.800 "block_size": 512, 00:12:50.800 "num_blocks": 65536, 00:12:50.800 "uuid": "f1e09bff-5ba3-4f0e-85fe-4629ff98a5e7", 00:12:50.800 "assigned_rate_limits": { 00:12:50.800 "rw_ios_per_sec": 0, 00:12:50.800 "rw_mbytes_per_sec": 0, 00:12:50.800 "r_mbytes_per_sec": 0, 00:12:50.800 "w_mbytes_per_sec": 0 00:12:50.800 }, 00:12:50.800 "claimed": true, 00:12:50.800 "claim_type": "exclusive_write", 00:12:50.800 "zoned": false, 00:12:50.800 "supported_io_types": { 00:12:50.800 "read": true, 00:12:50.800 "write": true, 00:12:50.800 "unmap": true, 00:12:50.800 "flush": true, 00:12:50.800 "reset": true, 00:12:50.800 "nvme_admin": false, 00:12:50.800 "nvme_io": false, 00:12:50.800 "nvme_io_md": false, 00:12:50.800 "write_zeroes": true, 00:12:50.800 "zcopy": true, 00:12:50.800 "get_zone_info": false, 00:12:50.800 "zone_management": false, 00:12:50.800 "zone_append": false, 00:12:50.800 "compare": false, 00:12:50.800 "compare_and_write": false, 00:12:50.800 "abort": true, 00:12:50.800 "seek_hole": false, 00:12:50.800 "seek_data": false, 00:12:50.800 "copy": true, 00:12:50.800 "nvme_iov_md": false 00:12:50.800 }, 00:12:50.800 "memory_domains": [ 00:12:50.800 { 00:12:50.800 "dma_device_id": "system", 00:12:50.800 "dma_device_type": 1 00:12:50.800 }, 00:12:50.800 { 00:12:50.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.800 "dma_device_type": 2 00:12:50.800 } 00:12:50.800 ], 00:12:50.800 "driver_specific": {} 00:12:50.800 } 00:12:50.800 ] 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.800 "name": "Existed_Raid", 00:12:50.800 "uuid": "f54aa04e-03f3-4286-93f9-66331e222bc9", 00:12:50.800 "strip_size_kb": 64, 00:12:50.800 "state": "online", 00:12:50.800 "raid_level": "raid0", 00:12:50.800 "superblock": true, 00:12:50.800 "num_base_bdevs": 2, 00:12:50.800 "num_base_bdevs_discovered": 2, 00:12:50.800 "num_base_bdevs_operational": 2, 00:12:50.800 "base_bdevs_list": [ 00:12:50.800 { 00:12:50.800 "name": "BaseBdev1", 00:12:50.800 "uuid": "39aeabe1-daca-44dc-b0e7-68708f1b45f8", 00:12:50.800 "is_configured": true, 00:12:50.800 "data_offset": 2048, 00:12:50.800 "data_size": 63488 00:12:50.800 }, 00:12:50.800 { 00:12:50.800 "name": "BaseBdev2", 00:12:50.800 "uuid": "f1e09bff-5ba3-4f0e-85fe-4629ff98a5e7", 00:12:50.800 "is_configured": true, 00:12:50.800 "data_offset": 2048, 00:12:50.800 "data_size": 63488 00:12:50.800 } 00:12:50.800 ] 00:12:50.800 }' 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.800 13:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.367 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:51.367 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:51.367 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:51.367 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:51.367 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:51.367 13:07:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:51.367 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:51.367 13:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.367 13:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.367 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:51.367 [2024-12-06 13:07:38.185549] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:51.367 13:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.367 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:51.367 "name": "Existed_Raid", 00:12:51.367 "aliases": [ 00:12:51.368 "f54aa04e-03f3-4286-93f9-66331e222bc9" 00:12:51.368 ], 00:12:51.368 "product_name": "Raid Volume", 00:12:51.368 "block_size": 512, 00:12:51.368 "num_blocks": 126976, 00:12:51.368 "uuid": "f54aa04e-03f3-4286-93f9-66331e222bc9", 00:12:51.368 "assigned_rate_limits": { 00:12:51.368 "rw_ios_per_sec": 0, 00:12:51.368 "rw_mbytes_per_sec": 0, 00:12:51.368 "r_mbytes_per_sec": 0, 00:12:51.368 "w_mbytes_per_sec": 0 00:12:51.368 }, 00:12:51.368 "claimed": false, 00:12:51.368 "zoned": false, 00:12:51.368 "supported_io_types": { 00:12:51.368 "read": true, 00:12:51.368 "write": true, 00:12:51.368 "unmap": true, 00:12:51.368 "flush": true, 00:12:51.368 "reset": true, 00:12:51.368 "nvme_admin": false, 00:12:51.368 "nvme_io": false, 00:12:51.368 "nvme_io_md": false, 00:12:51.368 "write_zeroes": true, 00:12:51.368 "zcopy": false, 00:12:51.368 "get_zone_info": false, 00:12:51.368 "zone_management": false, 00:12:51.368 "zone_append": false, 00:12:51.368 "compare": false, 00:12:51.368 "compare_and_write": false, 00:12:51.368 "abort": false, 00:12:51.368 "seek_hole": false, 
00:12:51.368 "seek_data": false, 00:12:51.368 "copy": false, 00:12:51.368 "nvme_iov_md": false 00:12:51.368 }, 00:12:51.368 "memory_domains": [ 00:12:51.368 { 00:12:51.368 "dma_device_id": "system", 00:12:51.368 "dma_device_type": 1 00:12:51.368 }, 00:12:51.368 { 00:12:51.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.368 "dma_device_type": 2 00:12:51.368 }, 00:12:51.368 { 00:12:51.368 "dma_device_id": "system", 00:12:51.368 "dma_device_type": 1 00:12:51.368 }, 00:12:51.368 { 00:12:51.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.368 "dma_device_type": 2 00:12:51.368 } 00:12:51.368 ], 00:12:51.368 "driver_specific": { 00:12:51.368 "raid": { 00:12:51.368 "uuid": "f54aa04e-03f3-4286-93f9-66331e222bc9", 00:12:51.368 "strip_size_kb": 64, 00:12:51.368 "state": "online", 00:12:51.368 "raid_level": "raid0", 00:12:51.368 "superblock": true, 00:12:51.368 "num_base_bdevs": 2, 00:12:51.368 "num_base_bdevs_discovered": 2, 00:12:51.368 "num_base_bdevs_operational": 2, 00:12:51.368 "base_bdevs_list": [ 00:12:51.368 { 00:12:51.368 "name": "BaseBdev1", 00:12:51.368 "uuid": "39aeabe1-daca-44dc-b0e7-68708f1b45f8", 00:12:51.368 "is_configured": true, 00:12:51.368 "data_offset": 2048, 00:12:51.368 "data_size": 63488 00:12:51.368 }, 00:12:51.368 { 00:12:51.368 "name": "BaseBdev2", 00:12:51.368 "uuid": "f1e09bff-5ba3-4f0e-85fe-4629ff98a5e7", 00:12:51.368 "is_configured": true, 00:12:51.368 "data_offset": 2048, 00:12:51.368 "data_size": 63488 00:12:51.368 } 00:12:51.368 ] 00:12:51.368 } 00:12:51.368 } 00:12:51.368 }' 00:12:51.368 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:51.368 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:51.368 BaseBdev2' 00:12:51.368 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:12:51.368 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:51.368 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:51.368 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:51.368 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.368 13:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.368 13:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.368 13:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.627 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:51.627 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:51.627 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:51.627 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:51.627 13:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.627 13:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.627 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.627 13:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.627 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:51.627 13:07:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:51.627 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:51.627 13:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.627 13:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.627 [2024-12-06 13:07:38.453375] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:51.627 [2024-12-06 13:07:38.453437] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:51.627 [2024-12-06 13:07:38.453532] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:51.627 13:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.627 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:51.627 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:51.627 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:51.627 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:51.627 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:51.627 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:12:51.627 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.627 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:51.627 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:51.627 13:07:38 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.627 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:51.627 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.627 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.627 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.627 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.627 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.627 13:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.627 13:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.627 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.627 13:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.627 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.627 "name": "Existed_Raid", 00:12:51.627 "uuid": "f54aa04e-03f3-4286-93f9-66331e222bc9", 00:12:51.627 "strip_size_kb": 64, 00:12:51.627 "state": "offline", 00:12:51.627 "raid_level": "raid0", 00:12:51.627 "superblock": true, 00:12:51.627 "num_base_bdevs": 2, 00:12:51.627 "num_base_bdevs_discovered": 1, 00:12:51.627 "num_base_bdevs_operational": 1, 00:12:51.627 "base_bdevs_list": [ 00:12:51.627 { 00:12:51.627 "name": null, 00:12:51.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.627 "is_configured": false, 00:12:51.627 "data_offset": 0, 00:12:51.627 "data_size": 63488 00:12:51.627 }, 00:12:51.627 { 00:12:51.627 "name": "BaseBdev2", 00:12:51.627 "uuid": 
"f1e09bff-5ba3-4f0e-85fe-4629ff98a5e7", 00:12:51.627 "is_configured": true, 00:12:51.627 "data_offset": 2048, 00:12:51.627 "data_size": 63488 00:12:51.627 } 00:12:51.627 ] 00:12:51.627 }' 00:12:51.627 13:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.627 13:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.194 13:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:52.194 13:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:52.194 13:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.194 13:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:52.194 13:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.194 13:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.194 13:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.194 13:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:52.194 13:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:52.194 13:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:52.194 13:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.194 13:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.194 [2024-12-06 13:07:39.108069] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:52.194 [2024-12-06 13:07:39.108176] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
Existed_Raid, state offline 00:12:52.194 13:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.194 13:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:52.194 13:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:52.194 13:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.195 13:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.195 13:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.195 13:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:52.454 13:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.454 13:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:52.454 13:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:52.454 13:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:12:52.454 13:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61118 00:12:52.454 13:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61118 ']' 00:12:52.454 13:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61118 00:12:52.454 13:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:52.454 13:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:52.454 13:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61118 00:12:52.454 killing process with pid 61118 00:12:52.454 13:07:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:52.454 13:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:52.454 13:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61118' 00:12:52.454 13:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61118 00:12:52.454 [2024-12-06 13:07:39.290191] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:52.454 13:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61118 00:12:52.454 [2024-12-06 13:07:39.305713] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:53.390 13:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:53.390 00:12:53.390 real 0m5.641s 00:12:53.390 user 0m8.462s 00:12:53.390 sys 0m0.854s 00:12:53.390 13:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:53.390 ************************************ 00:12:53.390 END TEST raid_state_function_test_sb 00:12:53.390 ************************************ 00:12:53.390 13:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.650 13:07:40 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:12:53.650 13:07:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:53.650 13:07:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:53.650 13:07:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:53.650 ************************************ 00:12:53.650 START TEST raid_superblock_test 00:12:53.650 ************************************ 00:12:53.650 13:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:12:53.650 
13:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:12:53.650 13:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:12:53.650 13:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:53.650 13:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:53.650 13:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:53.650 13:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:53.650 13:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:53.650 13:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:53.650 13:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:53.650 13:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:53.650 13:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:53.650 13:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:53.650 13:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:53.650 13:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:12:53.650 13:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:53.650 13:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:53.650 13:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61381 00:12:53.650 13:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61381 00:12:53.650 13:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:53.650 13:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61381 ']' 00:12:53.650 13:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.650 13:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:53.650 13:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.650 13:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:53.650 13:07:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.650 [2024-12-06 13:07:40.577533] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:12:53.650 [2024-12-06 13:07:40.578008] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61381 ] 00:12:53.910 [2024-12-06 13:07:40.770086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.169 [2024-12-06 13:07:40.932358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.427 [2024-12-06 13:07:41.190016] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:54.427 [2024-12-06 13:07:41.190326] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:54.686 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:54.686 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:54.686 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:54.686 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:54.686 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:54.686 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:54.686 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:54.686 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:54.686 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:54.686 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:54.686 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:54.686 
13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.686 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.945 malloc1 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.945 [2024-12-06 13:07:41.734170] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:54.945 [2024-12-06 13:07:41.734591] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.945 [2024-12-06 13:07:41.734858] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:54.945 [2024-12-06 13:07:41.735037] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.945 [2024-12-06 13:07:41.738851] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.945 [2024-12-06 13:07:41.738913] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:54.945 pt1 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.945 malloc2 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.945 [2024-12-06 13:07:41.800007] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:54.945 [2024-12-06 13:07:41.800278] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.945 [2024-12-06 13:07:41.800370] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:54.945 [2024-12-06 13:07:41.800593] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.945 [2024-12-06 13:07:41.803413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.945 [2024-12-06 13:07:41.803581] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:54.945 
pt2 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.945 [2024-12-06 13:07:41.808079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:54.945 [2024-12-06 13:07:41.810609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:54.945 [2024-12-06 13:07:41.810945] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:54.945 [2024-12-06 13:07:41.810971] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:54.945 [2024-12-06 13:07:41.811281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:54.945 [2024-12-06 13:07:41.811500] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:54.945 [2024-12-06 13:07:41.811521] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:54.945 [2024-12-06 13:07:41.811713] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.945 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.945 "name": "raid_bdev1", 00:12:54.945 "uuid": "bb8c85cb-a27e-4fb4-bb49-6b0fdc70674e", 00:12:54.945 "strip_size_kb": 64, 00:12:54.945 "state": "online", 00:12:54.945 "raid_level": "raid0", 00:12:54.945 "superblock": true, 00:12:54.945 "num_base_bdevs": 2, 00:12:54.945 "num_base_bdevs_discovered": 2, 00:12:54.945 "num_base_bdevs_operational": 2, 00:12:54.945 "base_bdevs_list": [ 00:12:54.945 { 00:12:54.945 "name": "pt1", 
00:12:54.945 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:54.945 "is_configured": true, 00:12:54.945 "data_offset": 2048, 00:12:54.945 "data_size": 63488 00:12:54.945 }, 00:12:54.945 { 00:12:54.945 "name": "pt2", 00:12:54.945 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:54.945 "is_configured": true, 00:12:54.945 "data_offset": 2048, 00:12:54.945 "data_size": 63488 00:12:54.945 } 00:12:54.945 ] 00:12:54.945 }' 00:12:54.946 13:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.946 13:07:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.513 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:55.513 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:55.513 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:55.513 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:55.513 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:55.513 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:55.513 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:55.513 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:55.513 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.513 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.513 [2024-12-06 13:07:42.324637] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:55.513 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.513 13:07:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:55.513 "name": "raid_bdev1", 00:12:55.513 "aliases": [ 00:12:55.513 "bb8c85cb-a27e-4fb4-bb49-6b0fdc70674e" 00:12:55.513 ], 00:12:55.513 "product_name": "Raid Volume", 00:12:55.513 "block_size": 512, 00:12:55.513 "num_blocks": 126976, 00:12:55.513 "uuid": "bb8c85cb-a27e-4fb4-bb49-6b0fdc70674e", 00:12:55.513 "assigned_rate_limits": { 00:12:55.513 "rw_ios_per_sec": 0, 00:12:55.513 "rw_mbytes_per_sec": 0, 00:12:55.513 "r_mbytes_per_sec": 0, 00:12:55.513 "w_mbytes_per_sec": 0 00:12:55.513 }, 00:12:55.513 "claimed": false, 00:12:55.513 "zoned": false, 00:12:55.513 "supported_io_types": { 00:12:55.513 "read": true, 00:12:55.513 "write": true, 00:12:55.513 "unmap": true, 00:12:55.513 "flush": true, 00:12:55.513 "reset": true, 00:12:55.513 "nvme_admin": false, 00:12:55.513 "nvme_io": false, 00:12:55.513 "nvme_io_md": false, 00:12:55.513 "write_zeroes": true, 00:12:55.513 "zcopy": false, 00:12:55.513 "get_zone_info": false, 00:12:55.513 "zone_management": false, 00:12:55.513 "zone_append": false, 00:12:55.513 "compare": false, 00:12:55.513 "compare_and_write": false, 00:12:55.513 "abort": false, 00:12:55.513 "seek_hole": false, 00:12:55.513 "seek_data": false, 00:12:55.513 "copy": false, 00:12:55.513 "nvme_iov_md": false 00:12:55.513 }, 00:12:55.513 "memory_domains": [ 00:12:55.513 { 00:12:55.513 "dma_device_id": "system", 00:12:55.513 "dma_device_type": 1 00:12:55.513 }, 00:12:55.513 { 00:12:55.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.513 "dma_device_type": 2 00:12:55.513 }, 00:12:55.513 { 00:12:55.513 "dma_device_id": "system", 00:12:55.513 "dma_device_type": 1 00:12:55.513 }, 00:12:55.513 { 00:12:55.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.513 "dma_device_type": 2 00:12:55.513 } 00:12:55.513 ], 00:12:55.513 "driver_specific": { 00:12:55.513 "raid": { 00:12:55.513 "uuid": "bb8c85cb-a27e-4fb4-bb49-6b0fdc70674e", 00:12:55.513 "strip_size_kb": 64, 00:12:55.513 "state": "online", 00:12:55.513 
"raid_level": "raid0", 00:12:55.513 "superblock": true, 00:12:55.513 "num_base_bdevs": 2, 00:12:55.513 "num_base_bdevs_discovered": 2, 00:12:55.513 "num_base_bdevs_operational": 2, 00:12:55.513 "base_bdevs_list": [ 00:12:55.513 { 00:12:55.513 "name": "pt1", 00:12:55.513 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:55.513 "is_configured": true, 00:12:55.513 "data_offset": 2048, 00:12:55.513 "data_size": 63488 00:12:55.513 }, 00:12:55.513 { 00:12:55.513 "name": "pt2", 00:12:55.513 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:55.513 "is_configured": true, 00:12:55.513 "data_offset": 2048, 00:12:55.513 "data_size": 63488 00:12:55.513 } 00:12:55.513 ] 00:12:55.513 } 00:12:55.513 } 00:12:55.513 }' 00:12:55.513 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:55.513 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:55.513 pt2' 00:12:55.513 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.513 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:55.513 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.513 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.513 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:55.513 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.513 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.513 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.513 13:07:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.513 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.513 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.513 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:55.513 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.513 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.513 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:55.773 [2024-12-06 13:07:42.580621] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bb8c85cb-a27e-4fb4-bb49-6b0fdc70674e 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
bb8c85cb-a27e-4fb4-bb49-6b0fdc70674e ']' 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.773 [2024-12-06 13:07:42.632264] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:55.773 [2024-12-06 13:07:42.632312] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:55.773 [2024-12-06 13:07:42.632419] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:55.773 [2024-12-06 13:07:42.632507] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:55.773 [2024-12-06 13:07:42.632531] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:55.773 13:07:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create 
-z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.773 [2024-12-06 13:07:42.776399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:55.773 [2024-12-06 13:07:42.779208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:55.773 [2024-12-06 13:07:42.779410] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:55.773 [2024-12-06 13:07:42.779642] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:55.773 [2024-12-06 13:07:42.779836] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:55.773 [2024-12-06 13:07:42.780075] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:12:55.773 request: 00:12:55.773 { 00:12:55.773 "name": "raid_bdev1", 00:12:55.773 "raid_level": "raid0", 00:12:55.773 "base_bdevs": [ 00:12:55.773 "malloc1", 00:12:55.773 "malloc2" 00:12:55.773 ], 00:12:55.773 "strip_size_kb": 64, 00:12:55.773 
"superblock": false, 00:12:55.773 "method": "bdev_raid_create", 00:12:55.773 "req_id": 1 00:12:55.773 } 00:12:55.773 Got JSON-RPC error response 00:12:55.773 response: 00:12:55.773 { 00:12:55.773 "code": -17, 00:12:55.773 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:55.773 } 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:55.773 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:56.033 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.033 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:56.033 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.033 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.033 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.033 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:56.033 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:56.033 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:56.033 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.033 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.033 [2024-12-06 13:07:42.840431] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: 
Match on malloc1 00:12:56.033 [2024-12-06 13:07:42.840761] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.033 [2024-12-06 13:07:42.840800] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:56.033 [2024-12-06 13:07:42.840820] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.033 [2024-12-06 13:07:42.843793] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.033 [2024-12-06 13:07:42.843842] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:56.033 [2024-12-06 13:07:42.843970] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:56.033 [2024-12-06 13:07:42.844053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:56.033 pt1 00:12:56.033 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.033 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:12:56.033 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.033 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.033 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:56.033 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:56.033 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:56.033 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.033 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.033 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:56.033 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.033 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.033 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.033 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.033 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.034 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.034 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.034 "name": "raid_bdev1", 00:12:56.034 "uuid": "bb8c85cb-a27e-4fb4-bb49-6b0fdc70674e", 00:12:56.034 "strip_size_kb": 64, 00:12:56.034 "state": "configuring", 00:12:56.034 "raid_level": "raid0", 00:12:56.034 "superblock": true, 00:12:56.034 "num_base_bdevs": 2, 00:12:56.034 "num_base_bdevs_discovered": 1, 00:12:56.034 "num_base_bdevs_operational": 2, 00:12:56.034 "base_bdevs_list": [ 00:12:56.034 { 00:12:56.034 "name": "pt1", 00:12:56.034 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:56.034 "is_configured": true, 00:12:56.034 "data_offset": 2048, 00:12:56.034 "data_size": 63488 00:12:56.034 }, 00:12:56.034 { 00:12:56.034 "name": null, 00:12:56.034 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:56.034 "is_configured": false, 00:12:56.034 "data_offset": 2048, 00:12:56.034 "data_size": 63488 00:12:56.034 } 00:12:56.034 ] 00:12:56.034 }' 00:12:56.034 13:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.034 13:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.601 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:12:56.601 13:07:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:56.601 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:56.601 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:56.601 13:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.601 13:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.601 [2024-12-06 13:07:43.380604] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:56.601 [2024-12-06 13:07:43.380711] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.601 [2024-12-06 13:07:43.380746] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:56.601 [2024-12-06 13:07:43.380765] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.601 [2024-12-06 13:07:43.381378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.601 [2024-12-06 13:07:43.381416] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:56.601 [2024-12-06 13:07:43.381538] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:56.601 [2024-12-06 13:07:43.381581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:56.601 [2024-12-06 13:07:43.381739] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:56.601 [2024-12-06 13:07:43.381762] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:56.601 [2024-12-06 13:07:43.382065] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:56.601 [2024-12-06 13:07:43.382264] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:12:56.601 [2024-12-06 13:07:43.382279] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:56.601 [2024-12-06 13:07:43.382449] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.601 pt2 00:12:56.601 13:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.601 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:56.601 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:56.601 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:56.601 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.601 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.601 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:56.601 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:56.601 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:56.601 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.601 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.601 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.601 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.601 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.601 13:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.601 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.601 13:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.601 13:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.601 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.601 "name": "raid_bdev1", 00:12:56.601 "uuid": "bb8c85cb-a27e-4fb4-bb49-6b0fdc70674e", 00:12:56.601 "strip_size_kb": 64, 00:12:56.601 "state": "online", 00:12:56.601 "raid_level": "raid0", 00:12:56.601 "superblock": true, 00:12:56.601 "num_base_bdevs": 2, 00:12:56.601 "num_base_bdevs_discovered": 2, 00:12:56.601 "num_base_bdevs_operational": 2, 00:12:56.601 "base_bdevs_list": [ 00:12:56.601 { 00:12:56.601 "name": "pt1", 00:12:56.601 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:56.601 "is_configured": true, 00:12:56.601 "data_offset": 2048, 00:12:56.601 "data_size": 63488 00:12:56.601 }, 00:12:56.601 { 00:12:56.601 "name": "pt2", 00:12:56.601 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:56.601 "is_configured": true, 00:12:56.601 "data_offset": 2048, 00:12:56.601 "data_size": 63488 00:12:56.601 } 00:12:56.601 ] 00:12:56.601 }' 00:12:56.601 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.601 13:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.168 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:57.168 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:57.168 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:57.168 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:57.168 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:57.168 13:07:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:57.168 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:57.168 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:57.168 13:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.168 13:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.168 [2024-12-06 13:07:43.925074] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:57.168 13:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.168 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:57.168 "name": "raid_bdev1", 00:12:57.168 "aliases": [ 00:12:57.168 "bb8c85cb-a27e-4fb4-bb49-6b0fdc70674e" 00:12:57.168 ], 00:12:57.168 "product_name": "Raid Volume", 00:12:57.168 "block_size": 512, 00:12:57.168 "num_blocks": 126976, 00:12:57.168 "uuid": "bb8c85cb-a27e-4fb4-bb49-6b0fdc70674e", 00:12:57.168 "assigned_rate_limits": { 00:12:57.168 "rw_ios_per_sec": 0, 00:12:57.168 "rw_mbytes_per_sec": 0, 00:12:57.168 "r_mbytes_per_sec": 0, 00:12:57.168 "w_mbytes_per_sec": 0 00:12:57.168 }, 00:12:57.168 "claimed": false, 00:12:57.168 "zoned": false, 00:12:57.168 "supported_io_types": { 00:12:57.168 "read": true, 00:12:57.168 "write": true, 00:12:57.168 "unmap": true, 00:12:57.168 "flush": true, 00:12:57.168 "reset": true, 00:12:57.168 "nvme_admin": false, 00:12:57.168 "nvme_io": false, 00:12:57.168 "nvme_io_md": false, 00:12:57.168 "write_zeroes": true, 00:12:57.168 "zcopy": false, 00:12:57.168 "get_zone_info": false, 00:12:57.168 "zone_management": false, 00:12:57.168 "zone_append": false, 00:12:57.168 "compare": false, 00:12:57.168 "compare_and_write": false, 00:12:57.168 "abort": false, 00:12:57.168 "seek_hole": false, 00:12:57.168 
"seek_data": false, 00:12:57.168 "copy": false, 00:12:57.168 "nvme_iov_md": false 00:12:57.168 }, 00:12:57.168 "memory_domains": [ 00:12:57.168 { 00:12:57.168 "dma_device_id": "system", 00:12:57.168 "dma_device_type": 1 00:12:57.168 }, 00:12:57.168 { 00:12:57.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.168 "dma_device_type": 2 00:12:57.168 }, 00:12:57.168 { 00:12:57.168 "dma_device_id": "system", 00:12:57.168 "dma_device_type": 1 00:12:57.168 }, 00:12:57.168 { 00:12:57.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.168 "dma_device_type": 2 00:12:57.168 } 00:12:57.168 ], 00:12:57.168 "driver_specific": { 00:12:57.168 "raid": { 00:12:57.168 "uuid": "bb8c85cb-a27e-4fb4-bb49-6b0fdc70674e", 00:12:57.168 "strip_size_kb": 64, 00:12:57.168 "state": "online", 00:12:57.168 "raid_level": "raid0", 00:12:57.168 "superblock": true, 00:12:57.168 "num_base_bdevs": 2, 00:12:57.168 "num_base_bdevs_discovered": 2, 00:12:57.168 "num_base_bdevs_operational": 2, 00:12:57.168 "base_bdevs_list": [ 00:12:57.168 { 00:12:57.168 "name": "pt1", 00:12:57.168 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:57.168 "is_configured": true, 00:12:57.168 "data_offset": 2048, 00:12:57.168 "data_size": 63488 00:12:57.168 }, 00:12:57.168 { 00:12:57.168 "name": "pt2", 00:12:57.168 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:57.168 "is_configured": true, 00:12:57.168 "data_offset": 2048, 00:12:57.168 "data_size": 63488 00:12:57.168 } 00:12:57.168 ] 00:12:57.168 } 00:12:57.168 } 00:12:57.168 }' 00:12:57.168 13:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:57.168 13:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:57.168 pt2' 00:12:57.168 13:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.168 13:07:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:57.168 13:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.168 13:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:57.168 13:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.168 13:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.169 13:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.169 13:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.169 13:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.169 13:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.169 13:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.169 13:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.169 13:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:57.169 13:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.169 13:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.169 13:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.427 13:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.427 13:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.427 13:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:12:57.427 13:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:57.427 13:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.427 13:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.427 [2024-12-06 13:07:44.201145] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:57.427 13:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.427 13:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' bb8c85cb-a27e-4fb4-bb49-6b0fdc70674e '!=' bb8c85cb-a27e-4fb4-bb49-6b0fdc70674e ']' 00:12:57.427 13:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:12:57.427 13:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:57.427 13:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:57.427 13:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61381 00:12:57.427 13:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61381 ']' 00:12:57.427 13:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61381 00:12:57.427 13:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:57.427 13:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:57.427 13:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61381 00:12:57.427 killing process with pid 61381 00:12:57.427 13:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:57.427 13:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:57.427 13:07:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 61381' 00:12:57.427 13:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61381 00:12:57.427 [2024-12-06 13:07:44.284572] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:57.427 13:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61381 00:12:57.427 [2024-12-06 13:07:44.284708] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:57.427 [2024-12-06 13:07:44.284780] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:57.428 [2024-12-06 13:07:44.284806] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:57.706 [2024-12-06 13:07:44.476983] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:58.640 ************************************ 00:12:58.640 END TEST raid_superblock_test 00:12:58.640 ************************************ 00:12:58.640 13:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:58.640 00:12:58.640 real 0m5.124s 00:12:58.640 user 0m7.535s 00:12:58.640 sys 0m0.798s 00:12:58.640 13:07:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.640 13:07:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.640 13:07:45 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:12:58.640 13:07:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:58.640 13:07:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.640 13:07:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:58.640 ************************************ 00:12:58.640 START TEST raid_read_error_test 00:12:58.640 ************************************ 00:12:58.640 13:07:45 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:12:58.640 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:58.640 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:12:58.640 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:58.640 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:58.640 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:58.640 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:58.641 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:58.641 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:58.641 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:58.641 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:58.641 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:58.641 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:58.641 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:58.641 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:58.641 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:58.641 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:58.641 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:58.641 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:58.641 13:07:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:58.641 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:58.641 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:58.641 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:58.641 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qzHRRG9ORA 00:12:58.641 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61598 00:12:58.641 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:58.641 13:07:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61598 00:12:58.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.641 13:07:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61598 ']' 00:12:58.641 13:07:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.641 13:07:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:58.641 13:07:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.641 13:07:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:58.641 13:07:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.899 [2024-12-06 13:07:45.752165] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:12:58.899 [2024-12-06 13:07:45.752360] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61598 ] 00:12:59.157 [2024-12-06 13:07:45.950620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.157 [2024-12-06 13:07:46.137551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.415 [2024-12-06 13:07:46.403380] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:59.415 [2024-12-06 13:07:46.403454] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.980 BaseBdev1_malloc 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.980 true 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.980 [2024-12-06 13:07:46.897823] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:59.980 [2024-12-06 13:07:46.897929] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.980 [2024-12-06 13:07:46.897987] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:59.980 [2024-12-06 13:07:46.898012] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.980 [2024-12-06 13:07:46.901322] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.980 [2024-12-06 13:07:46.901376] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:59.980 BaseBdev1 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.980 BaseBdev2_malloc 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.980 true 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.980 [2024-12-06 13:07:46.962544] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:59.980 [2024-12-06 13:07:46.962633] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.980 [2024-12-06 13:07:46.962661] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:59.980 [2024-12-06 13:07:46.962680] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.980 [2024-12-06 13:07:46.965737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.980 [2024-12-06 13:07:46.965791] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:59.980 BaseBdev2 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.980 [2024-12-06 13:07:46.970652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:12:59.980 [2024-12-06 13:07:46.973379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:59.980 [2024-12-06 13:07:46.973851] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:59.980 [2024-12-06 13:07:46.973886] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:59.980 [2024-12-06 13:07:46.974202] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:59.980 [2024-12-06 13:07:46.974516] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:59.980 [2024-12-06 13:07:46.974545] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:59.980 [2024-12-06 13:07:46.974876] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.980 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.981 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.239 13:07:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.239 13:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.239 "name": "raid_bdev1", 00:13:00.239 "uuid": "139ede8c-bb69-4890-b8fc-0252513a2d3a", 00:13:00.239 "strip_size_kb": 64, 00:13:00.239 "state": "online", 00:13:00.239 "raid_level": "raid0", 00:13:00.239 "superblock": true, 00:13:00.239 "num_base_bdevs": 2, 00:13:00.239 "num_base_bdevs_discovered": 2, 00:13:00.239 "num_base_bdevs_operational": 2, 00:13:00.239 "base_bdevs_list": [ 00:13:00.239 { 00:13:00.239 "name": "BaseBdev1", 00:13:00.239 "uuid": "d6a29511-4a60-556f-b8ee-c21ba7242231", 00:13:00.239 "is_configured": true, 00:13:00.239 "data_offset": 2048, 00:13:00.239 "data_size": 63488 00:13:00.239 }, 00:13:00.239 { 00:13:00.239 "name": "BaseBdev2", 00:13:00.239 "uuid": "f88e695c-35d9-59c8-aa7f-3191b2abfadd", 00:13:00.239 "is_configured": true, 00:13:00.239 "data_offset": 2048, 00:13:00.239 "data_size": 63488 00:13:00.239 } 00:13:00.239 ] 00:13:00.239 }' 00:13:00.239 13:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.239 13:07:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.804 13:07:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:00.804 13:07:47 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:00.804 [2024-12-06 13:07:47.664633] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:01.739 13:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:01.739 13:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.739 13:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.739 13:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.739 13:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:01.739 13:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:01.739 13:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:13:01.739 13:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:13:01.739 13:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.739 13:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.739 13:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:01.739 13:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.739 13:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:01.739 13:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.739 13:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.739 13:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:01.739 13:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.739 13:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.739 13:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.739 13:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.739 13:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.739 13:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.739 13:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.739 "name": "raid_bdev1", 00:13:01.739 "uuid": "139ede8c-bb69-4890-b8fc-0252513a2d3a", 00:13:01.739 "strip_size_kb": 64, 00:13:01.739 "state": "online", 00:13:01.739 "raid_level": "raid0", 00:13:01.739 "superblock": true, 00:13:01.739 "num_base_bdevs": 2, 00:13:01.739 "num_base_bdevs_discovered": 2, 00:13:01.739 "num_base_bdevs_operational": 2, 00:13:01.739 "base_bdevs_list": [ 00:13:01.739 { 00:13:01.739 "name": "BaseBdev1", 00:13:01.739 "uuid": "d6a29511-4a60-556f-b8ee-c21ba7242231", 00:13:01.739 "is_configured": true, 00:13:01.739 "data_offset": 2048, 00:13:01.739 "data_size": 63488 00:13:01.739 }, 00:13:01.739 { 00:13:01.739 "name": "BaseBdev2", 00:13:01.739 "uuid": "f88e695c-35d9-59c8-aa7f-3191b2abfadd", 00:13:01.739 "is_configured": true, 00:13:01.739 "data_offset": 2048, 00:13:01.739 "data_size": 63488 00:13:01.739 } 00:13:01.739 ] 00:13:01.739 }' 00:13:01.739 13:07:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.739 13:07:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.070 13:07:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:02.070 13:07:49 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.070 13:07:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.070 [2024-12-06 13:07:49.071579] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:02.070 [2024-12-06 13:07:49.071908] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:02.070 { 00:13:02.070 "results": [ 00:13:02.070 { 00:13:02.070 "job": "raid_bdev1", 00:13:02.070 "core_mask": "0x1", 00:13:02.070 "workload": "randrw", 00:13:02.070 "percentage": 50, 00:13:02.070 "status": "finished", 00:13:02.070 "queue_depth": 1, 00:13:02.070 "io_size": 131072, 00:13:02.070 "runtime": 1.40473, 00:13:02.070 "iops": 8889.964619535427, 00:13:02.070 "mibps": 1111.2455774419284, 00:13:02.070 "io_failed": 1, 00:13:02.070 "io_timeout": 0, 00:13:02.070 "avg_latency_us": 156.96102606657496, 00:13:02.070 "min_latency_us": 43.054545454545455, 00:13:02.070 "max_latency_us": 1899.0545454545454 00:13:02.070 } 00:13:02.070 ], 00:13:02.070 "core_count": 1 00:13:02.070 } 00:13:02.070 [2024-12-06 13:07:49.075697] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:02.070 [2024-12-06 13:07:49.075829] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.070 [2024-12-06 13:07:49.075887] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:02.070 [2024-12-06 13:07:49.075907] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:02.070 13:07:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.070 13:07:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61598 00:13:02.070 13:07:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61598 ']' 00:13:02.070 13:07:49 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61598 00:13:02.070 13:07:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:13:02.327 13:07:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:02.327 13:07:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61598 00:13:02.327 killing process with pid 61598 00:13:02.327 13:07:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:02.327 13:07:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:02.327 13:07:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61598' 00:13:02.327 13:07:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61598 00:13:02.327 [2024-12-06 13:07:49.117523] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:02.327 13:07:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61598 00:13:02.327 [2024-12-06 13:07:49.249683] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:03.700 13:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qzHRRG9ORA 00:13:03.700 13:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:03.700 13:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:03.700 13:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:13:03.700 13:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:13:03.700 13:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:03.700 13:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:03.700 ************************************ 
00:13:03.700 END TEST raid_read_error_test 00:13:03.700 ************************************ 00:13:03.700 13:07:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:13:03.700 00:13:03.700 real 0m4.835s 00:13:03.700 user 0m6.060s 00:13:03.700 sys 0m0.659s 00:13:03.700 13:07:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:03.700 13:07:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.700 13:07:50 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:13:03.700 13:07:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:03.700 13:07:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:03.700 13:07:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:03.700 ************************************ 00:13:03.700 START TEST raid_write_error_test 00:13:03.700 ************************************ 00:13:03.700 13:07:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:13:03.700 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:13:03.700 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:13:03.700 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:03.700 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:03.700 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:03.700 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:03.700 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:03.700 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:03.700 13:07:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:03.700 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:03.700 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:03.700 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:03.700 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:03.700 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:03.700 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:03.700 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:03.700 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:03.700 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:03.700 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:13:03.700 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:03.700 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:03.700 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:03.700 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.BMtGgs0llZ 00:13:03.700 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61738 00:13:03.700 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61738 00:13:03.700 13:07:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61738 ']' 00:13:03.700 13:07:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:03.700 13:07:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.700 13:07:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:03.700 13:07:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.700 13:07:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:03.700 13:07:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.700 [2024-12-06 13:07:50.649175] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:13:03.700 [2024-12-06 13:07:50.650866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61738 ] 00:13:03.957 [2024-12-06 13:07:50.858985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.214 [2024-12-06 13:07:51.012409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.472 [2024-12-06 13:07:51.236407] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:04.472 [2024-12-06 13:07:51.236502] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:04.738 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:04.738 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:04.738 13:07:51 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:04.738 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:04.738 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.739 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.739 BaseBdev1_malloc 00:13:04.739 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.739 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:04.739 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.739 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.739 true 00:13:04.739 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.739 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:04.739 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.739 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.739 [2024-12-06 13:07:51.722611] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:04.739 [2024-12-06 13:07:51.722811] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.739 [2024-12-06 13:07:51.722861] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:04.739 [2024-12-06 13:07:51.722880] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.739 [2024-12-06 13:07:51.725670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.739 [2024-12-06 13:07:51.725727] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:04.739 BaseBdev1 00:13:04.739 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.739 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:04.739 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:04.739 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.739 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.997 BaseBdev2_malloc 00:13:04.997 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.997 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:04.997 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.997 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.997 true 00:13:04.997 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.997 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:04.997 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.997 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.997 [2024-12-06 13:07:51.778281] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:04.997 [2024-12-06 13:07:51.778350] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.997 [2024-12-06 13:07:51.778388] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 
00:13:04.997 [2024-12-06 13:07:51.778405] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.997 [2024-12-06 13:07:51.781158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.997 [2024-12-06 13:07:51.781347] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:04.997 BaseBdev2 00:13:04.997 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.997 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:13:04.997 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.997 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.997 [2024-12-06 13:07:51.786354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:04.997 [2024-12-06 13:07:51.788825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:04.997 [2024-12-06 13:07:51.789078] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:04.997 [2024-12-06 13:07:51.789105] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:04.997 [2024-12-06 13:07:51.789400] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:04.997 [2024-12-06 13:07:51.789644] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:04.997 [2024-12-06 13:07:51.789667] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:04.997 [2024-12-06 13:07:51.789858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.997 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:04.997 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:13:04.997 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.997 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.997 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:04.997 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:04.997 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:04.997 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.997 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.997 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.997 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.997 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.997 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.997 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.997 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.997 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.997 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.997 "name": "raid_bdev1", 00:13:04.997 "uuid": "0839db00-8376-4037-9d38-83f767f747e8", 00:13:04.997 "strip_size_kb": 64, 00:13:04.997 "state": "online", 00:13:04.997 "raid_level": "raid0", 00:13:04.997 "superblock": 
true, 00:13:04.997 "num_base_bdevs": 2, 00:13:04.997 "num_base_bdevs_discovered": 2, 00:13:04.997 "num_base_bdevs_operational": 2, 00:13:04.997 "base_bdevs_list": [ 00:13:04.997 { 00:13:04.997 "name": "BaseBdev1", 00:13:04.997 "uuid": "9ae1f6b8-d833-5401-b1f1-1a4c0cfb2c9a", 00:13:04.997 "is_configured": true, 00:13:04.997 "data_offset": 2048, 00:13:04.997 "data_size": 63488 00:13:04.997 }, 00:13:04.997 { 00:13:04.997 "name": "BaseBdev2", 00:13:04.997 "uuid": "da9ce15e-28bd-55ff-8814-85d5c1ca212f", 00:13:04.997 "is_configured": true, 00:13:04.997 "data_offset": 2048, 00:13:04.997 "data_size": 63488 00:13:04.997 } 00:13:04.997 ] 00:13:04.997 }' 00:13:04.997 13:07:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.997 13:07:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.563 13:07:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:05.563 13:07:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:05.563 [2024-12-06 13:07:52.443961] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:06.512 13:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:06.512 13:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.512 13:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.512 13:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.512 13:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:06.512 13:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:06.512 13:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:13:06.512 13:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:13:06.512 13:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.512 13:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.512 13:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:06.512 13:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.512 13:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:06.512 13:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.512 13:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.512 13:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.512 13:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.512 13:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.512 13:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.512 13:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.512 13:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.512 13:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.512 13:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.512 "name": "raid_bdev1", 00:13:06.512 "uuid": "0839db00-8376-4037-9d38-83f767f747e8", 00:13:06.512 "strip_size_kb": 64, 00:13:06.512 "state": "online", 00:13:06.512 "raid_level": "raid0", 
00:13:06.512 "superblock": true, 00:13:06.512 "num_base_bdevs": 2, 00:13:06.512 "num_base_bdevs_discovered": 2, 00:13:06.512 "num_base_bdevs_operational": 2, 00:13:06.512 "base_bdevs_list": [ 00:13:06.512 { 00:13:06.512 "name": "BaseBdev1", 00:13:06.512 "uuid": "9ae1f6b8-d833-5401-b1f1-1a4c0cfb2c9a", 00:13:06.512 "is_configured": true, 00:13:06.512 "data_offset": 2048, 00:13:06.512 "data_size": 63488 00:13:06.512 }, 00:13:06.512 { 00:13:06.512 "name": "BaseBdev2", 00:13:06.512 "uuid": "da9ce15e-28bd-55ff-8814-85d5c1ca212f", 00:13:06.512 "is_configured": true, 00:13:06.512 "data_offset": 2048, 00:13:06.512 "data_size": 63488 00:13:06.512 } 00:13:06.512 ] 00:13:06.512 }' 00:13:06.512 13:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.512 13:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.079 13:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:07.079 13:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.079 13:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.079 [2024-12-06 13:07:53.889183] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:07.079 [2024-12-06 13:07:53.889234] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:07.079 [2024-12-06 13:07:53.893341] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:07.079 [2024-12-06 13:07:53.893693] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:07.079 [2024-12-06 13:07:53.893905] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:07.079 [2024-12-06 13:07:53.894090] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:07.079 { 
00:13:07.079 "results": [ 00:13:07.079 { 00:13:07.079 "job": "raid_bdev1", 00:13:07.079 "core_mask": "0x1", 00:13:07.079 "workload": "randrw", 00:13:07.079 "percentage": 50, 00:13:07.079 "status": "finished", 00:13:07.079 "queue_depth": 1, 00:13:07.079 "io_size": 131072, 00:13:07.079 "runtime": 1.442905, 00:13:07.079 "iops": 10128.178916837907, 00:13:07.079 "mibps": 1266.0223646047384, 00:13:07.079 "io_failed": 1, 00:13:07.079 "io_timeout": 0, 00:13:07.079 "avg_latency_us": 137.40042223120702, 00:13:07.079 "min_latency_us": 43.28727272727273, 00:13:07.079 "max_latency_us": 1854.370909090909 00:13:07.079 } 00:13:07.079 ], 00:13:07.079 "core_count": 1 00:13:07.079 } 00:13:07.079 13:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.079 13:07:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61738 00:13:07.079 13:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 61738 ']' 00:13:07.079 13:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61738 00:13:07.079 13:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:13:07.079 13:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:07.079 13:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61738 00:13:07.079 13:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:07.079 13:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:07.079 13:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61738' 00:13:07.079 killing process with pid 61738 00:13:07.079 13:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61738 00:13:07.079 [2024-12-06 13:07:53.929076] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:07.079 13:07:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61738 00:13:07.079 [2024-12-06 13:07:54.076523] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:08.452 13:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.BMtGgs0llZ 00:13:08.452 13:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:08.452 13:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:08.452 13:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:13:08.452 13:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:13:08.452 13:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:08.452 13:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:08.452 13:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:13:08.452 00:13:08.452 real 0m4.782s 00:13:08.452 user 0m5.998s 00:13:08.452 sys 0m0.597s 00:13:08.452 ************************************ 00:13:08.452 END TEST raid_write_error_test 00:13:08.452 ************************************ 00:13:08.452 13:07:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:08.452 13:07:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.452 13:07:55 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:08.452 13:07:55 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:13:08.452 13:07:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:08.452 13:07:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.452 13:07:55 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:13:08.452 ************************************ 00:13:08.452 START TEST raid_state_function_test 00:13:08.452 ************************************ 00:13:08.452 13:07:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:13:08.452 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:13:08.452 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:13:08.452 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:08.452 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:08.452 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:08.452 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:08.452 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:08.452 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:08.452 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:08.452 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:08.452 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:08.452 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:08.452 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:08.452 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:08.452 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:08.452 13:07:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:08.452 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:08.452 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:08.452 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:13:08.452 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:08.452 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:08.452 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:08.452 Process raid pid: 61887 00:13:08.452 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:08.452 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61887 00:13:08.452 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61887' 00:13:08.452 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61887 00:13:08.452 13:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:08.452 13:07:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61887 ']' 00:13:08.452 13:07:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.452 13:07:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:08.452 13:07:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:08.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.452 13:07:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:08.452 13:07:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.452 [2024-12-06 13:07:55.446554] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:13:08.452 [2024-12-06 13:07:55.447027] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:08.710 [2024-12-06 13:07:55.630001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.968 [2024-12-06 13:07:55.763332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.968 [2024-12-06 13:07:55.974973] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:08.968 [2024-12-06 13:07:55.975260] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.535 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:09.535 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:09.535 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:09.535 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.535 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.535 [2024-12-06 13:07:56.431602] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:09.535 [2024-12-06 13:07:56.431678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:13:09.535 [2024-12-06 13:07:56.431695] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:09.535 [2024-12-06 13:07:56.431711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:09.535 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.535 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:09.535 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:09.535 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:09.535 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:09.535 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:09.535 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:09.535 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.535 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.535 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.535 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.535 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.535 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.535 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:09.535 13:07:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:09.535 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.535 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.535 "name": "Existed_Raid", 00:13:09.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.535 "strip_size_kb": 64, 00:13:09.535 "state": "configuring", 00:13:09.535 "raid_level": "concat", 00:13:09.535 "superblock": false, 00:13:09.535 "num_base_bdevs": 2, 00:13:09.535 "num_base_bdevs_discovered": 0, 00:13:09.535 "num_base_bdevs_operational": 2, 00:13:09.535 "base_bdevs_list": [ 00:13:09.535 { 00:13:09.535 "name": "BaseBdev1", 00:13:09.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.535 "is_configured": false, 00:13:09.535 "data_offset": 0, 00:13:09.535 "data_size": 0 00:13:09.535 }, 00:13:09.535 { 00:13:09.535 "name": "BaseBdev2", 00:13:09.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.535 "is_configured": false, 00:13:09.535 "data_offset": 0, 00:13:09.535 "data_size": 0 00:13:09.535 } 00:13:09.535 ] 00:13:09.535 }' 00:13:09.535 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.535 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.102 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:10.102 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.102 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.102 [2024-12-06 13:07:56.943729] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:10.102 [2024-12-06 13:07:56.943774] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:10.102 13:07:56 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.102 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:10.102 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.102 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.102 [2024-12-06 13:07:56.951688] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:10.102 [2024-12-06 13:07:56.951745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:10.102 [2024-12-06 13:07:56.951762] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:10.102 [2024-12-06 13:07:56.951781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:10.102 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.102 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:10.102 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.102 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.102 [2024-12-06 13:07:56.996690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:10.102 BaseBdev1 00:13:10.102 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.103 13:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:10.103 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:10.103 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
local bdev_timeout= 00:13:10.103 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:10.103 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:10.103 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:10.103 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:10.103 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.103 13:07:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.103 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.103 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:10.103 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.103 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.103 [ 00:13:10.103 { 00:13:10.103 "name": "BaseBdev1", 00:13:10.103 "aliases": [ 00:13:10.103 "5ffa3a7a-48c3-42d6-87c3-dc58778c26d9" 00:13:10.103 ], 00:13:10.103 "product_name": "Malloc disk", 00:13:10.103 "block_size": 512, 00:13:10.103 "num_blocks": 65536, 00:13:10.103 "uuid": "5ffa3a7a-48c3-42d6-87c3-dc58778c26d9", 00:13:10.103 "assigned_rate_limits": { 00:13:10.103 "rw_ios_per_sec": 0, 00:13:10.103 "rw_mbytes_per_sec": 0, 00:13:10.103 "r_mbytes_per_sec": 0, 00:13:10.103 "w_mbytes_per_sec": 0 00:13:10.103 }, 00:13:10.103 "claimed": true, 00:13:10.103 "claim_type": "exclusive_write", 00:13:10.103 "zoned": false, 00:13:10.103 "supported_io_types": { 00:13:10.103 "read": true, 00:13:10.103 "write": true, 00:13:10.103 "unmap": true, 00:13:10.103 "flush": true, 00:13:10.103 "reset": true, 00:13:10.103 "nvme_admin": false, 00:13:10.103 
"nvme_io": false, 00:13:10.103 "nvme_io_md": false, 00:13:10.103 "write_zeroes": true, 00:13:10.103 "zcopy": true, 00:13:10.103 "get_zone_info": false, 00:13:10.103 "zone_management": false, 00:13:10.103 "zone_append": false, 00:13:10.103 "compare": false, 00:13:10.103 "compare_and_write": false, 00:13:10.103 "abort": true, 00:13:10.103 "seek_hole": false, 00:13:10.103 "seek_data": false, 00:13:10.103 "copy": true, 00:13:10.103 "nvme_iov_md": false 00:13:10.103 }, 00:13:10.103 "memory_domains": [ 00:13:10.103 { 00:13:10.103 "dma_device_id": "system", 00:13:10.103 "dma_device_type": 1 00:13:10.103 }, 00:13:10.103 { 00:13:10.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.103 "dma_device_type": 2 00:13:10.103 } 00:13:10.103 ], 00:13:10.103 "driver_specific": {} 00:13:10.103 } 00:13:10.103 ] 00:13:10.103 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.103 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:10.103 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:10.103 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:10.103 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:10.103 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:10.103 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:10.103 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:10.103 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.103 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.103 13:07:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.103 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.103 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.103 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.103 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.103 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.103 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.103 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.103 "name": "Existed_Raid", 00:13:10.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.103 "strip_size_kb": 64, 00:13:10.103 "state": "configuring", 00:13:10.103 "raid_level": "concat", 00:13:10.103 "superblock": false, 00:13:10.103 "num_base_bdevs": 2, 00:13:10.103 "num_base_bdevs_discovered": 1, 00:13:10.103 "num_base_bdevs_operational": 2, 00:13:10.103 "base_bdevs_list": [ 00:13:10.103 { 00:13:10.103 "name": "BaseBdev1", 00:13:10.103 "uuid": "5ffa3a7a-48c3-42d6-87c3-dc58778c26d9", 00:13:10.103 "is_configured": true, 00:13:10.103 "data_offset": 0, 00:13:10.103 "data_size": 65536 00:13:10.103 }, 00:13:10.103 { 00:13:10.103 "name": "BaseBdev2", 00:13:10.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.103 "is_configured": false, 00:13:10.103 "data_offset": 0, 00:13:10.103 "data_size": 0 00:13:10.103 } 00:13:10.103 ] 00:13:10.103 }' 00:13:10.103 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.103 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.670 13:07:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:10.670 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.670 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.670 [2024-12-06 13:07:57.520880] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:10.670 [2024-12-06 13:07:57.520941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:10.670 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.670 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:10.670 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.670 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.670 [2024-12-06 13:07:57.532923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:10.670 [2024-12-06 13:07:57.535505] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:10.670 [2024-12-06 13:07:57.535686] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:10.670 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.670 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:10.670 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:10.670 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:10.670 13:07:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:10.670 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:10.670 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:10.670 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:10.670 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:10.670 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.670 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.670 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.670 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.670 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.670 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.670 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.670 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.670 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.670 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.670 "name": "Existed_Raid", 00:13:10.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.670 "strip_size_kb": 64, 00:13:10.670 "state": "configuring", 00:13:10.670 "raid_level": "concat", 00:13:10.670 "superblock": false, 00:13:10.670 "num_base_bdevs": 2, 00:13:10.670 "num_base_bdevs_discovered": 1, 00:13:10.670 "num_base_bdevs_operational": 2, 
00:13:10.670 "base_bdevs_list": [ 00:13:10.670 { 00:13:10.670 "name": "BaseBdev1", 00:13:10.670 "uuid": "5ffa3a7a-48c3-42d6-87c3-dc58778c26d9", 00:13:10.670 "is_configured": true, 00:13:10.670 "data_offset": 0, 00:13:10.670 "data_size": 65536 00:13:10.670 }, 00:13:10.670 { 00:13:10.670 "name": "BaseBdev2", 00:13:10.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.670 "is_configured": false, 00:13:10.670 "data_offset": 0, 00:13:10.670 "data_size": 0 00:13:10.670 } 00:13:10.670 ] 00:13:10.670 }' 00:13:10.670 13:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.670 13:07:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.237 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:11.237 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.237 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.237 [2024-12-06 13:07:58.099061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:11.237 [2024-12-06 13:07:58.099131] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:11.237 [2024-12-06 13:07:58.099145] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:11.237 [2024-12-06 13:07:58.099502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:11.237 [2024-12-06 13:07:58.099724] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:11.237 [2024-12-06 13:07:58.099745] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:11.237 [2024-12-06 13:07:58.100085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.237 BaseBdev2 00:13:11.237 
13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.237 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:11.237 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:11.237 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:11.237 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:11.237 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:11.237 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:11.237 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:11.237 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.237 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.237 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.237 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:11.237 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.237 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.237 [ 00:13:11.237 { 00:13:11.237 "name": "BaseBdev2", 00:13:11.237 "aliases": [ 00:13:11.237 "fa648664-2f14-4594-8b26-fc70c776cc48" 00:13:11.237 ], 00:13:11.237 "product_name": "Malloc disk", 00:13:11.237 "block_size": 512, 00:13:11.237 "num_blocks": 65536, 00:13:11.237 "uuid": "fa648664-2f14-4594-8b26-fc70c776cc48", 00:13:11.237 "assigned_rate_limits": { 00:13:11.237 "rw_ios_per_sec": 0, 00:13:11.237 "rw_mbytes_per_sec": 0, 
00:13:11.237 "r_mbytes_per_sec": 0, 00:13:11.237 "w_mbytes_per_sec": 0 00:13:11.237 }, 00:13:11.237 "claimed": true, 00:13:11.237 "claim_type": "exclusive_write", 00:13:11.237 "zoned": false, 00:13:11.237 "supported_io_types": { 00:13:11.237 "read": true, 00:13:11.237 "write": true, 00:13:11.237 "unmap": true, 00:13:11.237 "flush": true, 00:13:11.237 "reset": true, 00:13:11.237 "nvme_admin": false, 00:13:11.237 "nvme_io": false, 00:13:11.237 "nvme_io_md": false, 00:13:11.237 "write_zeroes": true, 00:13:11.237 "zcopy": true, 00:13:11.237 "get_zone_info": false, 00:13:11.237 "zone_management": false, 00:13:11.237 "zone_append": false, 00:13:11.237 "compare": false, 00:13:11.237 "compare_and_write": false, 00:13:11.237 "abort": true, 00:13:11.237 "seek_hole": false, 00:13:11.237 "seek_data": false, 00:13:11.237 "copy": true, 00:13:11.237 "nvme_iov_md": false 00:13:11.237 }, 00:13:11.237 "memory_domains": [ 00:13:11.237 { 00:13:11.237 "dma_device_id": "system", 00:13:11.237 "dma_device_type": 1 00:13:11.237 }, 00:13:11.237 { 00:13:11.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.237 "dma_device_type": 2 00:13:11.237 } 00:13:11.237 ], 00:13:11.237 "driver_specific": {} 00:13:11.237 } 00:13:11.237 ] 00:13:11.237 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.237 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:11.237 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:11.237 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:11.237 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:13:11.237 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:11.237 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:13:11.237 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:11.237 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:11.237 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:11.237 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.238 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.238 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.238 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.238 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.238 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:11.238 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.238 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.238 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.238 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.238 "name": "Existed_Raid", 00:13:11.238 "uuid": "c258cb22-6a2a-4bb2-aab0-e485f46a7519", 00:13:11.238 "strip_size_kb": 64, 00:13:11.238 "state": "online", 00:13:11.238 "raid_level": "concat", 00:13:11.238 "superblock": false, 00:13:11.238 "num_base_bdevs": 2, 00:13:11.238 "num_base_bdevs_discovered": 2, 00:13:11.238 "num_base_bdevs_operational": 2, 00:13:11.238 "base_bdevs_list": [ 00:13:11.238 { 00:13:11.238 "name": "BaseBdev1", 00:13:11.238 "uuid": "5ffa3a7a-48c3-42d6-87c3-dc58778c26d9", 00:13:11.238 
"is_configured": true, 00:13:11.238 "data_offset": 0, 00:13:11.238 "data_size": 65536 00:13:11.238 }, 00:13:11.238 { 00:13:11.238 "name": "BaseBdev2", 00:13:11.238 "uuid": "fa648664-2f14-4594-8b26-fc70c776cc48", 00:13:11.238 "is_configured": true, 00:13:11.238 "data_offset": 0, 00:13:11.238 "data_size": 65536 00:13:11.238 } 00:13:11.238 ] 00:13:11.238 }' 00:13:11.238 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.238 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.806 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:11.806 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:11.806 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:11.806 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:11.806 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:11.806 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:11.806 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:11.806 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:11.806 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.806 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.806 [2024-12-06 13:07:58.647625] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:11.806 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.806 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:13:11.806 "name": "Existed_Raid", 00:13:11.806 "aliases": [ 00:13:11.806 "c258cb22-6a2a-4bb2-aab0-e485f46a7519" 00:13:11.806 ], 00:13:11.806 "product_name": "Raid Volume", 00:13:11.806 "block_size": 512, 00:13:11.806 "num_blocks": 131072, 00:13:11.806 "uuid": "c258cb22-6a2a-4bb2-aab0-e485f46a7519", 00:13:11.807 "assigned_rate_limits": { 00:13:11.807 "rw_ios_per_sec": 0, 00:13:11.807 "rw_mbytes_per_sec": 0, 00:13:11.807 "r_mbytes_per_sec": 0, 00:13:11.807 "w_mbytes_per_sec": 0 00:13:11.807 }, 00:13:11.807 "claimed": false, 00:13:11.807 "zoned": false, 00:13:11.807 "supported_io_types": { 00:13:11.807 "read": true, 00:13:11.807 "write": true, 00:13:11.807 "unmap": true, 00:13:11.807 "flush": true, 00:13:11.807 "reset": true, 00:13:11.807 "nvme_admin": false, 00:13:11.807 "nvme_io": false, 00:13:11.807 "nvme_io_md": false, 00:13:11.807 "write_zeroes": true, 00:13:11.807 "zcopy": false, 00:13:11.807 "get_zone_info": false, 00:13:11.807 "zone_management": false, 00:13:11.807 "zone_append": false, 00:13:11.807 "compare": false, 00:13:11.807 "compare_and_write": false, 00:13:11.807 "abort": false, 00:13:11.807 "seek_hole": false, 00:13:11.807 "seek_data": false, 00:13:11.807 "copy": false, 00:13:11.807 "nvme_iov_md": false 00:13:11.807 }, 00:13:11.807 "memory_domains": [ 00:13:11.807 { 00:13:11.807 "dma_device_id": "system", 00:13:11.807 "dma_device_type": 1 00:13:11.807 }, 00:13:11.807 { 00:13:11.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.807 "dma_device_type": 2 00:13:11.807 }, 00:13:11.807 { 00:13:11.807 "dma_device_id": "system", 00:13:11.807 "dma_device_type": 1 00:13:11.807 }, 00:13:11.807 { 00:13:11.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.807 "dma_device_type": 2 00:13:11.807 } 00:13:11.807 ], 00:13:11.807 "driver_specific": { 00:13:11.807 "raid": { 00:13:11.807 "uuid": "c258cb22-6a2a-4bb2-aab0-e485f46a7519", 00:13:11.807 "strip_size_kb": 64, 00:13:11.807 "state": "online", 00:13:11.807 "raid_level": "concat", 
00:13:11.807 "superblock": false, 00:13:11.807 "num_base_bdevs": 2, 00:13:11.807 "num_base_bdevs_discovered": 2, 00:13:11.807 "num_base_bdevs_operational": 2, 00:13:11.807 "base_bdevs_list": [ 00:13:11.807 { 00:13:11.807 "name": "BaseBdev1", 00:13:11.807 "uuid": "5ffa3a7a-48c3-42d6-87c3-dc58778c26d9", 00:13:11.807 "is_configured": true, 00:13:11.807 "data_offset": 0, 00:13:11.807 "data_size": 65536 00:13:11.807 }, 00:13:11.807 { 00:13:11.807 "name": "BaseBdev2", 00:13:11.807 "uuid": "fa648664-2f14-4594-8b26-fc70c776cc48", 00:13:11.807 "is_configured": true, 00:13:11.807 "data_offset": 0, 00:13:11.807 "data_size": 65536 00:13:11.807 } 00:13:11.807 ] 00:13:11.807 } 00:13:11.807 } 00:13:11.807 }' 00:13:11.807 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:11.807 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:11.807 BaseBdev2' 00:13:11.807 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:11.807 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:11.807 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:11.807 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:11.807 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:11.807 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.807 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.066 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:12.066 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:12.066 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:12.066 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:12.066 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:12.066 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:12.066 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.066 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.066 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.066 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:12.066 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:12.066 13:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:12.066 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.066 13:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.066 [2024-12-06 13:07:58.931407] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:12.066 [2024-12-06 13:07:58.931454] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:12.066 [2024-12-06 13:07:58.931538] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:12.066 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.066 13:07:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:12.066 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:13:12.066 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:12.066 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:12.066 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:12.066 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:13:12.066 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:12.066 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:12.066 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:12.066 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:12.066 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:12.066 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.066 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.066 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.066 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.066 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.066 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.066 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:13:12.066 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.066 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.066 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.066 "name": "Existed_Raid", 00:13:12.066 "uuid": "c258cb22-6a2a-4bb2-aab0-e485f46a7519", 00:13:12.066 "strip_size_kb": 64, 00:13:12.066 "state": "offline", 00:13:12.066 "raid_level": "concat", 00:13:12.066 "superblock": false, 00:13:12.066 "num_base_bdevs": 2, 00:13:12.066 "num_base_bdevs_discovered": 1, 00:13:12.066 "num_base_bdevs_operational": 1, 00:13:12.066 "base_bdevs_list": [ 00:13:12.066 { 00:13:12.066 "name": null, 00:13:12.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.067 "is_configured": false, 00:13:12.067 "data_offset": 0, 00:13:12.067 "data_size": 65536 00:13:12.067 }, 00:13:12.067 { 00:13:12.067 "name": "BaseBdev2", 00:13:12.067 "uuid": "fa648664-2f14-4594-8b26-fc70c776cc48", 00:13:12.067 "is_configured": true, 00:13:12.067 "data_offset": 0, 00:13:12.067 "data_size": 65536 00:13:12.067 } 00:13:12.067 ] 00:13:12.067 }' 00:13:12.067 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.067 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.634 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:12.634 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:12.634 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.634 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.634 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:12.634 13:07:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.634 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.634 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:12.634 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:12.634 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:12.634 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.634 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.634 [2024-12-06 13:07:59.585764] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:12.634 [2024-12-06 13:07:59.585836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:12.892 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.892 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:12.892 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:12.892 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:12.892 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.892 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.892 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.892 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.892 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 
00:13:12.892 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:12.892 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:13:12.892 13:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61887 00:13:12.892 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61887 ']' 00:13:12.892 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 61887 00:13:12.892 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:12.892 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:12.892 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61887 00:13:12.892 killing process with pid 61887 00:13:12.892 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:12.892 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:12.892 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61887' 00:13:12.892 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61887 00:13:12.892 [2024-12-06 13:07:59.762952] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:12.892 13:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61887 00:13:12.893 [2024-12-06 13:07:59.777737] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:13.826 13:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:13.826 00:13:13.826 real 0m5.497s 00:13:13.826 user 0m8.214s 00:13:13.826 sys 0m0.845s 00:13:13.826 13:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:13:13.826 13:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.826 ************************************ 00:13:13.826 END TEST raid_state_function_test 00:13:13.826 ************************************ 00:13:14.084 13:08:00 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:13:14.084 13:08:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:14.084 13:08:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.084 13:08:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:14.084 ************************************ 00:13:14.084 START TEST raid_state_function_test_sb 00:13:14.084 ************************************ 00:13:14.084 13:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:13:14.084 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:13:14.084 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:13:14.084 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:14.084 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:14.084 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:14.084 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:14.084 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:14.084 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:14.084 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:14.084 13:08:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:14.084 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:14.084 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:14.084 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:14.084 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:14.084 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:14.084 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:14.084 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:14.084 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:14.084 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:13:14.084 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:14.084 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:14.084 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:14.084 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:14.084 Process raid pid: 62146 00:13:14.084 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62146 00:13:14.084 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62146' 00:13:14.084 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62146 00:13:14.085 13:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' 
-z 62146 ']' 00:13:14.085 13:08:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:14.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.085 13:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.085 13:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:14.085 13:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.085 13:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:14.085 13:08:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.085 [2024-12-06 13:08:01.045561] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:13:14.085 [2024-12-06 13:08:01.046077] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:14.343 [2024-12-06 13:08:01.243642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.614 [2024-12-06 13:08:01.422710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.870 [2024-12-06 13:08:01.665180] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:14.871 [2024-12-06 13:08:01.665245] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:15.127 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:15.127 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:15.127 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:15.127 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.127 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.127 [2024-12-06 13:08:02.102736] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:15.127 [2024-12-06 13:08:02.102840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:15.127 [2024-12-06 13:08:02.102865] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:15.127 [2024-12-06 13:08:02.102889] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:15.127 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:15.127 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:15.127 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:15.127 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:15.127 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:15.127 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:15.127 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:15.127 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.127 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.127 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.127 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.128 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.128 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.128 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.128 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.128 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.384 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.384 "name": "Existed_Raid", 00:13:15.384 "uuid": "4cd1a3b0-44c1-4d58-877b-1a2e3b353e9e", 00:13:15.384 
"strip_size_kb": 64, 00:13:15.384 "state": "configuring", 00:13:15.384 "raid_level": "concat", 00:13:15.384 "superblock": true, 00:13:15.384 "num_base_bdevs": 2, 00:13:15.384 "num_base_bdevs_discovered": 0, 00:13:15.384 "num_base_bdevs_operational": 2, 00:13:15.384 "base_bdevs_list": [ 00:13:15.384 { 00:13:15.384 "name": "BaseBdev1", 00:13:15.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.384 "is_configured": false, 00:13:15.384 "data_offset": 0, 00:13:15.384 "data_size": 0 00:13:15.384 }, 00:13:15.384 { 00:13:15.384 "name": "BaseBdev2", 00:13:15.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.384 "is_configured": false, 00:13:15.384 "data_offset": 0, 00:13:15.384 "data_size": 0 00:13:15.384 } 00:13:15.384 ] 00:13:15.384 }' 00:13:15.384 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.384 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.641 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:15.641 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.641 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.641 [2024-12-06 13:08:02.650806] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:15.641 [2024-12-06 13:08:02.651192] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:15.642 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.642 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:15.642 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:15.642 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.900 [2024-12-06 13:08:02.658835] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:15.900 [2024-12-06 13:08:02.659072] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:15.900 [2024-12-06 13:08:02.659109] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:15.900 [2024-12-06 13:08:02.659147] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:15.900 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.900 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:15.900 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.900 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.900 [2024-12-06 13:08:02.713043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:15.900 BaseBdev1 00:13:15.900 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.900 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:15.900 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:15.900 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:15.900 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:15.900 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:15.900 13:08:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:15.900 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:15.900 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.900 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.900 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.900 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:15.900 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.900 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.900 [ 00:13:15.900 { 00:13:15.900 "name": "BaseBdev1", 00:13:15.900 "aliases": [ 00:13:15.900 "2f515730-d864-4cfe-8d58-3cae929d7ff1" 00:13:15.900 ], 00:13:15.900 "product_name": "Malloc disk", 00:13:15.900 "block_size": 512, 00:13:15.900 "num_blocks": 65536, 00:13:15.900 "uuid": "2f515730-d864-4cfe-8d58-3cae929d7ff1", 00:13:15.900 "assigned_rate_limits": { 00:13:15.900 "rw_ios_per_sec": 0, 00:13:15.900 "rw_mbytes_per_sec": 0, 00:13:15.900 "r_mbytes_per_sec": 0, 00:13:15.900 "w_mbytes_per_sec": 0 00:13:15.900 }, 00:13:15.900 "claimed": true, 00:13:15.900 "claim_type": "exclusive_write", 00:13:15.900 "zoned": false, 00:13:15.900 "supported_io_types": { 00:13:15.900 "read": true, 00:13:15.900 "write": true, 00:13:15.900 "unmap": true, 00:13:15.900 "flush": true, 00:13:15.900 "reset": true, 00:13:15.900 "nvme_admin": false, 00:13:15.900 "nvme_io": false, 00:13:15.900 "nvme_io_md": false, 00:13:15.900 "write_zeroes": true, 00:13:15.900 "zcopy": true, 00:13:15.900 "get_zone_info": false, 00:13:15.900 "zone_management": false, 00:13:15.900 "zone_append": false, 00:13:15.900 "compare": false, 00:13:15.900 
"compare_and_write": false, 00:13:15.900 "abort": true, 00:13:15.900 "seek_hole": false, 00:13:15.900 "seek_data": false, 00:13:15.900 "copy": true, 00:13:15.900 "nvme_iov_md": false 00:13:15.900 }, 00:13:15.900 "memory_domains": [ 00:13:15.900 { 00:13:15.900 "dma_device_id": "system", 00:13:15.900 "dma_device_type": 1 00:13:15.900 }, 00:13:15.900 { 00:13:15.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.900 "dma_device_type": 2 00:13:15.900 } 00:13:15.900 ], 00:13:15.900 "driver_specific": {} 00:13:15.900 } 00:13:15.900 ] 00:13:15.900 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.900 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:15.900 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:15.900 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:15.900 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:15.900 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:15.900 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:15.900 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:15.900 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.900 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.900 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.900 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.900 13:08:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.900 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.900 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.900 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.900 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.900 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.900 "name": "Existed_Raid", 00:13:15.900 "uuid": "f3381daf-acd0-4e00-bc0b-2806af98f964", 00:13:15.900 "strip_size_kb": 64, 00:13:15.900 "state": "configuring", 00:13:15.900 "raid_level": "concat", 00:13:15.900 "superblock": true, 00:13:15.900 "num_base_bdevs": 2, 00:13:15.900 "num_base_bdevs_discovered": 1, 00:13:15.900 "num_base_bdevs_operational": 2, 00:13:15.900 "base_bdevs_list": [ 00:13:15.900 { 00:13:15.900 "name": "BaseBdev1", 00:13:15.900 "uuid": "2f515730-d864-4cfe-8d58-3cae929d7ff1", 00:13:15.900 "is_configured": true, 00:13:15.900 "data_offset": 2048, 00:13:15.900 "data_size": 63488 00:13:15.900 }, 00:13:15.900 { 00:13:15.900 "name": "BaseBdev2", 00:13:15.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.900 "is_configured": false, 00:13:15.900 "data_offset": 0, 00:13:15.900 "data_size": 0 00:13:15.900 } 00:13:15.900 ] 00:13:15.900 }' 00:13:15.900 13:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.900 13:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.468 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:16.468 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
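In the state dump above, `num_base_bdevs_discovered` tracks how many entries in `base_bdevs_list` are configured: after `bdev_malloc_create` brings up BaseBdev1 it reads 1 while BaseBdev2 still shows the all-zero UUID. A minimal sketch of that relationship, with `grep -c` standing in for the `jq` filter the suite uses elsewhere (`.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name`):

```shell
# Reduced base_bdevs_list matching the trace: BaseBdev1 claimed and
# configured, BaseBdev2 not yet created. One entry per line by construction.
base_bdevs_list='
  { "name": "BaseBdev1", "is_configured": true },
  { "name": "BaseBdev2", "is_configured": false }
'

# Count configured entries; this is what num_base_bdevs_discovered reports.
num_discovered=$(printf '%s\n' "$base_bdevs_list" | grep -c '"is_configured": true')
echo "num_base_bdevs_discovered=$num_discovered"   # prints num_base_bdevs_discovered=1
```

Once BaseBdev2 is created and claimed later in the trace, the count reaches `num_base_bdevs_operational` (2) and the raid transitions from `configuring` to `online`.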
00:13:16.468 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.468 [2024-12-06 13:08:03.229256] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:16.468 [2024-12-06 13:08:03.229595] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:16.468 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.468 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:16.468 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.468 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.468 [2024-12-06 13:08:03.237309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:16.468 [2024-12-06 13:08:03.240042] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:16.468 [2024-12-06 13:08:03.240100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:16.468 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.468 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:16.468 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:16.468 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:16.468 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.468 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:13:16.468 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:16.468 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.468 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:16.468 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.468 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.468 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.468 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.468 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.468 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.468 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.468 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.468 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.468 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.468 "name": "Existed_Raid", 00:13:16.468 "uuid": "061984a7-5202-404d-80b5-83630519fb4b", 00:13:16.468 "strip_size_kb": 64, 00:13:16.468 "state": "configuring", 00:13:16.468 "raid_level": "concat", 00:13:16.468 "superblock": true, 00:13:16.468 "num_base_bdevs": 2, 00:13:16.468 "num_base_bdevs_discovered": 1, 00:13:16.468 "num_base_bdevs_operational": 2, 00:13:16.468 "base_bdevs_list": [ 00:13:16.468 { 00:13:16.468 "name": "BaseBdev1", 00:13:16.468 "uuid": 
"2f515730-d864-4cfe-8d58-3cae929d7ff1", 00:13:16.468 "is_configured": true, 00:13:16.468 "data_offset": 2048, 00:13:16.468 "data_size": 63488 00:13:16.468 }, 00:13:16.468 { 00:13:16.468 "name": "BaseBdev2", 00:13:16.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.468 "is_configured": false, 00:13:16.468 "data_offset": 0, 00:13:16.468 "data_size": 0 00:13:16.468 } 00:13:16.468 ] 00:13:16.468 }' 00:13:16.468 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.468 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.035 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:17.035 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.035 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.035 [2024-12-06 13:08:03.792730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:17.035 [2024-12-06 13:08:03.793323] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:17.035 [2024-12-06 13:08:03.793351] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:17.035 BaseBdev2 00:13:17.035 [2024-12-06 13:08:03.793707] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:17.035 [2024-12-06 13:08:03.793953] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:17.035 [2024-12-06 13:08:03.793986] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:17.035 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.035 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:13:17.035 [2024-12-06 13:08:03.794161] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.035 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:17.035 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:17.035 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:17.035 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:17.035 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:17.035 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:17.035 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.035 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.035 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.035 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:17.035 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.035 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.035 [ 00:13:17.035 { 00:13:17.035 "name": "BaseBdev2", 00:13:17.035 "aliases": [ 00:13:17.035 "cfcb97a6-c946-4bd5-bce5-99a7e82d93cd" 00:13:17.035 ], 00:13:17.035 "product_name": "Malloc disk", 00:13:17.035 "block_size": 512, 00:13:17.035 "num_blocks": 65536, 00:13:17.035 "uuid": "cfcb97a6-c946-4bd5-bce5-99a7e82d93cd", 00:13:17.035 "assigned_rate_limits": { 00:13:17.035 "rw_ios_per_sec": 0, 00:13:17.035 "rw_mbytes_per_sec": 0, 00:13:17.035 "r_mbytes_per_sec": 0, 
00:13:17.035 "w_mbytes_per_sec": 0 00:13:17.035 }, 00:13:17.035 "claimed": true, 00:13:17.035 "claim_type": "exclusive_write", 00:13:17.035 "zoned": false, 00:13:17.035 "supported_io_types": { 00:13:17.035 "read": true, 00:13:17.035 "write": true, 00:13:17.035 "unmap": true, 00:13:17.035 "flush": true, 00:13:17.035 "reset": true, 00:13:17.035 "nvme_admin": false, 00:13:17.035 "nvme_io": false, 00:13:17.035 "nvme_io_md": false, 00:13:17.035 "write_zeroes": true, 00:13:17.035 "zcopy": true, 00:13:17.035 "get_zone_info": false, 00:13:17.035 "zone_management": false, 00:13:17.035 "zone_append": false, 00:13:17.035 "compare": false, 00:13:17.035 "compare_and_write": false, 00:13:17.035 "abort": true, 00:13:17.035 "seek_hole": false, 00:13:17.035 "seek_data": false, 00:13:17.035 "copy": true, 00:13:17.035 "nvme_iov_md": false 00:13:17.035 }, 00:13:17.035 "memory_domains": [ 00:13:17.035 { 00:13:17.035 "dma_device_id": "system", 00:13:17.035 "dma_device_type": 1 00:13:17.035 }, 00:13:17.035 { 00:13:17.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.035 "dma_device_type": 2 00:13:17.035 } 00:13:17.035 ], 00:13:17.035 "driver_specific": {} 00:13:17.035 } 00:13:17.035 ] 00:13:17.035 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.035 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:17.035 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:17.035 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:17.035 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:13:17.035 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.035 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:13:17.035 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:17.035 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:17.035 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:17.035 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.035 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.035 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.035 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.035 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.035 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.035 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.035 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.035 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.035 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.035 "name": "Existed_Raid", 00:13:17.035 "uuid": "061984a7-5202-404d-80b5-83630519fb4b", 00:13:17.035 "strip_size_kb": 64, 00:13:17.035 "state": "online", 00:13:17.035 "raid_level": "concat", 00:13:17.035 "superblock": true, 00:13:17.035 "num_base_bdevs": 2, 00:13:17.035 "num_base_bdevs_discovered": 2, 00:13:17.035 "num_base_bdevs_operational": 2, 00:13:17.035 "base_bdevs_list": [ 00:13:17.035 { 00:13:17.035 "name": "BaseBdev1", 00:13:17.035 "uuid": 
"2f515730-d864-4cfe-8d58-3cae929d7ff1", 00:13:17.035 "is_configured": true, 00:13:17.035 "data_offset": 2048, 00:13:17.035 "data_size": 63488 00:13:17.035 }, 00:13:17.035 { 00:13:17.035 "name": "BaseBdev2", 00:13:17.036 "uuid": "cfcb97a6-c946-4bd5-bce5-99a7e82d93cd", 00:13:17.036 "is_configured": true, 00:13:17.036 "data_offset": 2048, 00:13:17.036 "data_size": 63488 00:13:17.036 } 00:13:17.036 ] 00:13:17.036 }' 00:13:17.036 13:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.036 13:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.602 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:17.602 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:17.602 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:17.602 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:17.602 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:17.602 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:17.602 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:17.602 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:17.602 13:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.602 13:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.602 [2024-12-06 13:08:04.341276] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:17.602 13:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:13:17.602 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:17.602 "name": "Existed_Raid", 00:13:17.602 "aliases": [ 00:13:17.602 "061984a7-5202-404d-80b5-83630519fb4b" 00:13:17.602 ], 00:13:17.602 "product_name": "Raid Volume", 00:13:17.602 "block_size": 512, 00:13:17.602 "num_blocks": 126976, 00:13:17.602 "uuid": "061984a7-5202-404d-80b5-83630519fb4b", 00:13:17.602 "assigned_rate_limits": { 00:13:17.602 "rw_ios_per_sec": 0, 00:13:17.602 "rw_mbytes_per_sec": 0, 00:13:17.602 "r_mbytes_per_sec": 0, 00:13:17.602 "w_mbytes_per_sec": 0 00:13:17.602 }, 00:13:17.602 "claimed": false, 00:13:17.602 "zoned": false, 00:13:17.602 "supported_io_types": { 00:13:17.602 "read": true, 00:13:17.602 "write": true, 00:13:17.602 "unmap": true, 00:13:17.602 "flush": true, 00:13:17.602 "reset": true, 00:13:17.602 "nvme_admin": false, 00:13:17.602 "nvme_io": false, 00:13:17.602 "nvme_io_md": false, 00:13:17.602 "write_zeroes": true, 00:13:17.602 "zcopy": false, 00:13:17.602 "get_zone_info": false, 00:13:17.602 "zone_management": false, 00:13:17.602 "zone_append": false, 00:13:17.602 "compare": false, 00:13:17.602 "compare_and_write": false, 00:13:17.602 "abort": false, 00:13:17.602 "seek_hole": false, 00:13:17.602 "seek_data": false, 00:13:17.602 "copy": false, 00:13:17.602 "nvme_iov_md": false 00:13:17.602 }, 00:13:17.602 "memory_domains": [ 00:13:17.602 { 00:13:17.602 "dma_device_id": "system", 00:13:17.602 "dma_device_type": 1 00:13:17.602 }, 00:13:17.602 { 00:13:17.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.602 "dma_device_type": 2 00:13:17.602 }, 00:13:17.602 { 00:13:17.602 "dma_device_id": "system", 00:13:17.602 "dma_device_type": 1 00:13:17.602 }, 00:13:17.602 { 00:13:17.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.602 "dma_device_type": 2 00:13:17.602 } 00:13:17.602 ], 00:13:17.602 "driver_specific": { 00:13:17.602 "raid": { 00:13:17.602 "uuid": "061984a7-5202-404d-80b5-83630519fb4b", 00:13:17.602 
"strip_size_kb": 64, 00:13:17.602 "state": "online", 00:13:17.602 "raid_level": "concat", 00:13:17.603 "superblock": true, 00:13:17.603 "num_base_bdevs": 2, 00:13:17.603 "num_base_bdevs_discovered": 2, 00:13:17.603 "num_base_bdevs_operational": 2, 00:13:17.603 "base_bdevs_list": [ 00:13:17.603 { 00:13:17.603 "name": "BaseBdev1", 00:13:17.603 "uuid": "2f515730-d864-4cfe-8d58-3cae929d7ff1", 00:13:17.603 "is_configured": true, 00:13:17.603 "data_offset": 2048, 00:13:17.603 "data_size": 63488 00:13:17.603 }, 00:13:17.603 { 00:13:17.603 "name": "BaseBdev2", 00:13:17.603 "uuid": "cfcb97a6-c946-4bd5-bce5-99a7e82d93cd", 00:13:17.603 "is_configured": true, 00:13:17.603 "data_offset": 2048, 00:13:17.603 "data_size": 63488 00:13:17.603 } 00:13:17.603 ] 00:13:17.603 } 00:13:17.603 } 00:13:17.603 }' 00:13:17.603 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:17.603 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:17.603 BaseBdev2' 00:13:17.603 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:17.603 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:17.603 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:17.603 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:17.603 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:17.603 13:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.603 13:08:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:17.603 13:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.603 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:17.603 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:17.603 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:17.603 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:17.603 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:17.603 13:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.603 13:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.603 13:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.603 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:17.603 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:17.603 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:17.603 13:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.603 13:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.603 [2024-12-06 13:08:04.593090] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:17.603 [2024-12-06 13:08:04.593152] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:17.603 [2024-12-06 13:08:04.593225] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:17.861 13:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.861 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:17.861 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:13:17.861 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:17.861 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:13:17.861 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:17.861 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:13:17.861 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.861 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:17.861 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:17.861 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:17.861 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:17.861 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.861 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.861 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.861 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.861 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:13:17.861 13:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.861 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.861 13:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.861 13:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.861 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.861 "name": "Existed_Raid", 00:13:17.861 "uuid": "061984a7-5202-404d-80b5-83630519fb4b", 00:13:17.861 "strip_size_kb": 64, 00:13:17.861 "state": "offline", 00:13:17.861 "raid_level": "concat", 00:13:17.861 "superblock": true, 00:13:17.861 "num_base_bdevs": 2, 00:13:17.861 "num_base_bdevs_discovered": 1, 00:13:17.861 "num_base_bdevs_operational": 1, 00:13:17.861 "base_bdevs_list": [ 00:13:17.861 { 00:13:17.861 "name": null, 00:13:17.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.861 "is_configured": false, 00:13:17.861 "data_offset": 0, 00:13:17.861 "data_size": 63488 00:13:17.862 }, 00:13:17.862 { 00:13:17.862 "name": "BaseBdev2", 00:13:17.862 "uuid": "cfcb97a6-c946-4bd5-bce5-99a7e82d93cd", 00:13:17.862 "is_configured": true, 00:13:17.862 "data_offset": 2048, 00:13:17.862 "data_size": 63488 00:13:17.862 } 00:13:17.862 ] 00:13:17.862 }' 00:13:17.862 13:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.862 13:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.429 13:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:18.429 13:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:18.429 13:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:18.429 13:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:18.429 13:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.429 13:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.429 13:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.429 13:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:18.429 13:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:18.429 13:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:18.429 13:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.429 13:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.429 [2024-12-06 13:08:05.281449] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:18.429 [2024-12-06 13:08:05.281552] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:18.429 13:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.429 13:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:18.429 13:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:18.429 13:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:18.429 13:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.429 13:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.429 13:08:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.429 13:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.429 13:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:18.429 13:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:18.429 13:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:13:18.429 13:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62146 00:13:18.429 13:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62146 ']' 00:13:18.429 13:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62146 00:13:18.429 13:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:18.429 13:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:18.687 13:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62146 00:13:18.687 killing process with pid 62146 00:13:18.687 13:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:18.687 13:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:18.687 13:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62146' 00:13:18.687 13:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62146 00:13:18.687 [2024-12-06 13:08:05.476120] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:18.687 13:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62146 00:13:18.687 [2024-12-06 13:08:05.490981] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:19.679 13:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:19.679 00:13:19.679 real 0m5.684s 00:13:19.679 user 0m8.536s 00:13:19.679 sys 0m0.872s 00:13:19.679 13:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:19.679 ************************************ 00:13:19.679 END TEST raid_state_function_test_sb 00:13:19.679 ************************************ 00:13:19.679 13:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.679 13:08:06 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:13:19.679 13:08:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:19.679 13:08:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:19.679 13:08:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:19.679 ************************************ 00:13:19.679 START TEST raid_superblock_test 00:13:19.679 ************************************ 00:13:19.679 13:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:13:19.679 13:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:13:19.679 13:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:13:19.679 13:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:19.679 13:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:19.679 13:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:19.679 13:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:19.679 13:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:19.679 
13:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:19.679 13:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:19.679 13:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:19.679 13:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:19.679 13:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:19.679 13:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:19.679 13:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:13:19.679 13:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:19.679 13:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:19.679 13:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62398 00:13:19.679 13:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:19.679 13:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62398 00:13:19.679 13:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62398 ']' 00:13:19.679 13:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.679 13:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:19.679 13:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:19.679 13:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:19.679 13:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.938 [2024-12-06 13:08:06.756484] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:13:19.938 [2024-12-06 13:08:06.756893] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62398 ] 00:13:19.938 [2024-12-06 13:08:06.945204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.197 [2024-12-06 13:08:07.081916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.455 [2024-12-06 13:08:07.291223] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:20.455 [2024-12-06 13:08:07.291303] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:20.714 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:20.714 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:20.714 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:20.714 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:20.714 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:20.714 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:20.714 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:20.714 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:20.714 13:08:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:20.714 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:20.714 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:20.714 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.714 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.973 malloc1 00:13:20.973 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.973 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:20.973 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.973 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.973 [2024-12-06 13:08:07.775402] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:20.973 [2024-12-06 13:08:07.775802] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.973 [2024-12-06 13:08:07.775887] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:20.973 [2024-12-06 13:08:07.776030] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.973 [2024-12-06 13:08:07.779287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.973 [2024-12-06 13:08:07.779498] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:20.973 pt1 00:13:20.973 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.973 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:20.973 13:08:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:20.973 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:20.973 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:20.973 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:20.973 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:20.973 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:20.973 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:20.973 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:20.973 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.973 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.973 malloc2 00:13:20.973 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.973 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:20.973 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.973 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.973 [2024-12-06 13:08:07.834138] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:20.973 [2024-12-06 13:08:07.834234] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.973 [2024-12-06 13:08:07.834269] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:20.973 
[2024-12-06 13:08:07.834284] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.973 [2024-12-06 13:08:07.837306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.973 [2024-12-06 13:08:07.837349] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:20.973 pt2 00:13:20.973 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.973 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:20.973 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:20.974 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:13:20.974 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.974 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.974 [2024-12-06 13:08:07.842247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:20.974 [2024-12-06 13:08:07.844970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:20.974 [2024-12-06 13:08:07.845295] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:20.974 [2024-12-06 13:08:07.845423] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:20.974 [2024-12-06 13:08:07.845866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:20.974 [2024-12-06 13:08:07.846197] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:20.974 [2024-12-06 13:08:07.846329] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:20.974 [2024-12-06 13:08:07.846802] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:20.974 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.974 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:20.974 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:20.974 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:20.974 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:20.974 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:20.974 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:20.974 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.974 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.974 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.974 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.974 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.974 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.974 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.974 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.974 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.974 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.974 "name": "raid_bdev1", 00:13:20.974 "uuid": 
"ecd52ca4-111a-4753-8c38-42d85f4417f7", 00:13:20.974 "strip_size_kb": 64, 00:13:20.974 "state": "online", 00:13:20.974 "raid_level": "concat", 00:13:20.974 "superblock": true, 00:13:20.974 "num_base_bdevs": 2, 00:13:20.974 "num_base_bdevs_discovered": 2, 00:13:20.974 "num_base_bdevs_operational": 2, 00:13:20.974 "base_bdevs_list": [ 00:13:20.974 { 00:13:20.974 "name": "pt1", 00:13:20.974 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:20.974 "is_configured": true, 00:13:20.974 "data_offset": 2048, 00:13:20.974 "data_size": 63488 00:13:20.974 }, 00:13:20.974 { 00:13:20.974 "name": "pt2", 00:13:20.974 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:20.974 "is_configured": true, 00:13:20.974 "data_offset": 2048, 00:13:20.974 "data_size": 63488 00:13:20.974 } 00:13:20.974 ] 00:13:20.974 }' 00:13:20.974 13:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.974 13:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.539 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:21.539 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:21.539 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:21.539 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:21.539 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:21.539 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:21.539 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:21.540 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:21.540 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.540 
13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.540 [2024-12-06 13:08:08.431270] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:21.540 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.540 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:21.540 "name": "raid_bdev1", 00:13:21.540 "aliases": [ 00:13:21.540 "ecd52ca4-111a-4753-8c38-42d85f4417f7" 00:13:21.540 ], 00:13:21.540 "product_name": "Raid Volume", 00:13:21.540 "block_size": 512, 00:13:21.540 "num_blocks": 126976, 00:13:21.540 "uuid": "ecd52ca4-111a-4753-8c38-42d85f4417f7", 00:13:21.540 "assigned_rate_limits": { 00:13:21.540 "rw_ios_per_sec": 0, 00:13:21.540 "rw_mbytes_per_sec": 0, 00:13:21.540 "r_mbytes_per_sec": 0, 00:13:21.540 "w_mbytes_per_sec": 0 00:13:21.540 }, 00:13:21.540 "claimed": false, 00:13:21.540 "zoned": false, 00:13:21.540 "supported_io_types": { 00:13:21.540 "read": true, 00:13:21.540 "write": true, 00:13:21.540 "unmap": true, 00:13:21.540 "flush": true, 00:13:21.540 "reset": true, 00:13:21.540 "nvme_admin": false, 00:13:21.540 "nvme_io": false, 00:13:21.540 "nvme_io_md": false, 00:13:21.540 "write_zeroes": true, 00:13:21.540 "zcopy": false, 00:13:21.540 "get_zone_info": false, 00:13:21.540 "zone_management": false, 00:13:21.540 "zone_append": false, 00:13:21.540 "compare": false, 00:13:21.540 "compare_and_write": false, 00:13:21.540 "abort": false, 00:13:21.540 "seek_hole": false, 00:13:21.540 "seek_data": false, 00:13:21.540 "copy": false, 00:13:21.540 "nvme_iov_md": false 00:13:21.540 }, 00:13:21.540 "memory_domains": [ 00:13:21.540 { 00:13:21.540 "dma_device_id": "system", 00:13:21.540 "dma_device_type": 1 00:13:21.540 }, 00:13:21.540 { 00:13:21.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.540 "dma_device_type": 2 00:13:21.540 }, 00:13:21.540 { 00:13:21.540 "dma_device_id": "system", 00:13:21.540 
"dma_device_type": 1 00:13:21.540 }, 00:13:21.540 { 00:13:21.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.540 "dma_device_type": 2 00:13:21.540 } 00:13:21.540 ], 00:13:21.540 "driver_specific": { 00:13:21.540 "raid": { 00:13:21.540 "uuid": "ecd52ca4-111a-4753-8c38-42d85f4417f7", 00:13:21.540 "strip_size_kb": 64, 00:13:21.540 "state": "online", 00:13:21.540 "raid_level": "concat", 00:13:21.540 "superblock": true, 00:13:21.540 "num_base_bdevs": 2, 00:13:21.540 "num_base_bdevs_discovered": 2, 00:13:21.540 "num_base_bdevs_operational": 2, 00:13:21.540 "base_bdevs_list": [ 00:13:21.540 { 00:13:21.540 "name": "pt1", 00:13:21.540 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:21.540 "is_configured": true, 00:13:21.540 "data_offset": 2048, 00:13:21.540 "data_size": 63488 00:13:21.540 }, 00:13:21.540 { 00:13:21.540 "name": "pt2", 00:13:21.540 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:21.540 "is_configured": true, 00:13:21.540 "data_offset": 2048, 00:13:21.540 "data_size": 63488 00:13:21.540 } 00:13:21.540 ] 00:13:21.540 } 00:13:21.540 } 00:13:21.540 }' 00:13:21.540 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:21.540 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:21.540 pt2' 00:13:21.540 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:21.798 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:21.798 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:21.798 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:21.798 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.798 13:08:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.798 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:21.798 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.798 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:21.798 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:21.798 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:21.798 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:21.798 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.798 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:21.798 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.798 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.798 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:21.798 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:21.798 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:21.798 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:21.798 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.798 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.798 [2024-12-06 13:08:08.691328] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:13:21.798 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.798 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ecd52ca4-111a-4753-8c38-42d85f4417f7 00:13:21.798 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ecd52ca4-111a-4753-8c38-42d85f4417f7 ']' 00:13:21.799 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:21.799 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.799 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.799 [2024-12-06 13:08:08.738973] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:21.799 [2024-12-06 13:08:08.739134] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:21.799 [2024-12-06 13:08:08.739281] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:21.799 [2024-12-06 13:08:08.739355] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:21.799 [2024-12-06 13:08:08.739386] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:21.799 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.799 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.799 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.799 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:21.799 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.799 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:13:21.799 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:21.799 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:21.799 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:21.799 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:21.799 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.799 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.799 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.799 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:21.799 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:21.799 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.799 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.058 [2024-12-06 13:08:08.883038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:22.058 [2024-12-06 13:08:08.885718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:22.058 [2024-12-06 13:08:08.885924] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:22.058 [2024-12-06 13:08:08.886152] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:22.058 [2024-12-06 13:08:08.886362] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:22.058 [2024-12-06 13:08:08.886387] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:13:22.058 request: 00:13:22.058 { 00:13:22.058 "name": "raid_bdev1", 00:13:22.058 "raid_level": "concat", 00:13:22.058 "base_bdevs": [ 00:13:22.058 "malloc1", 00:13:22.058 "malloc2" 00:13:22.058 ], 00:13:22.058 "strip_size_kb": 64, 00:13:22.058 "superblock": false, 00:13:22.058 "method": "bdev_raid_create", 00:13:22.058 "req_id": 1 00:13:22.058 } 00:13:22.058 Got JSON-RPC error response 00:13:22.058 response: 00:13:22.058 { 00:13:22.058 "code": -17, 00:13:22.058 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:22.058 } 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.058 [2024-12-06 13:08:08.947135] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:22.058 [2024-12-06 13:08:08.947356] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.058 [2024-12-06 13:08:08.947427] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:22.058 [2024-12-06 13:08:08.947562] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.058 [2024-12-06 13:08:08.950433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.058 [2024-12-06 13:08:08.950629] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:22.058 [2024-12-06 13:08:08.950839] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:22.058 [2024-12-06 13:08:08.951059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:22.058 pt1 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.058 13:08:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.058 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.058 "name": "raid_bdev1", 00:13:22.058 "uuid": "ecd52ca4-111a-4753-8c38-42d85f4417f7", 00:13:22.058 "strip_size_kb": 64, 00:13:22.058 "state": "configuring", 00:13:22.058 "raid_level": "concat", 00:13:22.058 "superblock": true, 00:13:22.058 "num_base_bdevs": 2, 00:13:22.058 "num_base_bdevs_discovered": 1, 00:13:22.058 "num_base_bdevs_operational": 2, 00:13:22.058 "base_bdevs_list": [ 00:13:22.058 { 00:13:22.058 "name": "pt1", 00:13:22.058 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:22.058 "is_configured": true, 00:13:22.058 "data_offset": 2048, 00:13:22.058 "data_size": 63488 00:13:22.058 }, 00:13:22.058 { 00:13:22.058 "name": null, 00:13:22.058 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:22.058 "is_configured": false, 00:13:22.058 "data_offset": 2048, 00:13:22.058 "data_size": 63488 00:13:22.058 } 00:13:22.058 ] 00:13:22.058 }' 00:13:22.058 13:08:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.058 13:08:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.624 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:13:22.624 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:22.624 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:22.624 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:22.624 13:08:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.624 13:08:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.624 [2024-12-06 13:08:09.487631] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:22.624 [2024-12-06 13:08:09.487750] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.624 [2024-12-06 13:08:09.487785] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:22.624 [2024-12-06 13:08:09.487804] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.624 [2024-12-06 13:08:09.488466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.624 [2024-12-06 13:08:09.488529] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:22.624 [2024-12-06 13:08:09.488638] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:22.624 [2024-12-06 13:08:09.488681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:22.624 [2024-12-06 13:08:09.488834] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:22.624 [2024-12-06 13:08:09.488864] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:22.624 [2024-12-06 13:08:09.489180] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:22.624 [2024-12-06 13:08:09.489370] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:22.624 [2024-12-06 13:08:09.489392] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:22.624 [2024-12-06 13:08:09.489601] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.624 pt2 00:13:22.624 13:08:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.624 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:22.624 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:22.624 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:22.624 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:22.625 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.625 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:22.625 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.625 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:22.625 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.625 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.625 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.625 13:08:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.625 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.625 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.625 13:08:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.625 13:08:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.625 13:08:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.625 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.625 "name": "raid_bdev1", 00:13:22.625 "uuid": "ecd52ca4-111a-4753-8c38-42d85f4417f7", 00:13:22.625 "strip_size_kb": 64, 00:13:22.625 "state": "online", 00:13:22.625 "raid_level": "concat", 00:13:22.625 "superblock": true, 00:13:22.625 "num_base_bdevs": 2, 00:13:22.625 "num_base_bdevs_discovered": 2, 00:13:22.625 "num_base_bdevs_operational": 2, 00:13:22.625 "base_bdevs_list": [ 00:13:22.625 { 00:13:22.625 "name": "pt1", 00:13:22.625 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:22.625 "is_configured": true, 00:13:22.625 "data_offset": 2048, 00:13:22.625 "data_size": 63488 00:13:22.625 }, 00:13:22.625 { 00:13:22.625 "name": "pt2", 00:13:22.625 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:22.625 "is_configured": true, 00:13:22.625 "data_offset": 2048, 00:13:22.625 "data_size": 63488 00:13:22.625 } 00:13:22.625 ] 00:13:22.625 }' 00:13:22.625 13:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.625 13:08:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.192 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:23.192 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:23.192 
13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:23.192 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:23.192 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:23.192 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:23.192 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:23.192 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.192 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.192 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:23.192 [2024-12-06 13:08:10.028135] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:23.192 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.192 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:23.192 "name": "raid_bdev1", 00:13:23.192 "aliases": [ 00:13:23.192 "ecd52ca4-111a-4753-8c38-42d85f4417f7" 00:13:23.192 ], 00:13:23.192 "product_name": "Raid Volume", 00:13:23.192 "block_size": 512, 00:13:23.192 "num_blocks": 126976, 00:13:23.192 "uuid": "ecd52ca4-111a-4753-8c38-42d85f4417f7", 00:13:23.192 "assigned_rate_limits": { 00:13:23.192 "rw_ios_per_sec": 0, 00:13:23.192 "rw_mbytes_per_sec": 0, 00:13:23.192 "r_mbytes_per_sec": 0, 00:13:23.192 "w_mbytes_per_sec": 0 00:13:23.192 }, 00:13:23.192 "claimed": false, 00:13:23.192 "zoned": false, 00:13:23.192 "supported_io_types": { 00:13:23.192 "read": true, 00:13:23.192 "write": true, 00:13:23.192 "unmap": true, 00:13:23.192 "flush": true, 00:13:23.192 "reset": true, 00:13:23.192 "nvme_admin": false, 00:13:23.192 "nvme_io": false, 00:13:23.192 "nvme_io_md": false, 00:13:23.192 
"write_zeroes": true, 00:13:23.192 "zcopy": false, 00:13:23.192 "get_zone_info": false, 00:13:23.192 "zone_management": false, 00:13:23.192 "zone_append": false, 00:13:23.192 "compare": false, 00:13:23.192 "compare_and_write": false, 00:13:23.192 "abort": false, 00:13:23.192 "seek_hole": false, 00:13:23.192 "seek_data": false, 00:13:23.192 "copy": false, 00:13:23.192 "nvme_iov_md": false 00:13:23.192 }, 00:13:23.192 "memory_domains": [ 00:13:23.192 { 00:13:23.192 "dma_device_id": "system", 00:13:23.192 "dma_device_type": 1 00:13:23.192 }, 00:13:23.192 { 00:13:23.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.192 "dma_device_type": 2 00:13:23.192 }, 00:13:23.192 { 00:13:23.192 "dma_device_id": "system", 00:13:23.192 "dma_device_type": 1 00:13:23.192 }, 00:13:23.192 { 00:13:23.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.192 "dma_device_type": 2 00:13:23.192 } 00:13:23.192 ], 00:13:23.192 "driver_specific": { 00:13:23.192 "raid": { 00:13:23.192 "uuid": "ecd52ca4-111a-4753-8c38-42d85f4417f7", 00:13:23.192 "strip_size_kb": 64, 00:13:23.192 "state": "online", 00:13:23.192 "raid_level": "concat", 00:13:23.192 "superblock": true, 00:13:23.192 "num_base_bdevs": 2, 00:13:23.192 "num_base_bdevs_discovered": 2, 00:13:23.192 "num_base_bdevs_operational": 2, 00:13:23.192 "base_bdevs_list": [ 00:13:23.192 { 00:13:23.192 "name": "pt1", 00:13:23.192 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:23.192 "is_configured": true, 00:13:23.193 "data_offset": 2048, 00:13:23.193 "data_size": 63488 00:13:23.193 }, 00:13:23.193 { 00:13:23.193 "name": "pt2", 00:13:23.193 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:23.193 "is_configured": true, 00:13:23.193 "data_offset": 2048, 00:13:23.193 "data_size": 63488 00:13:23.193 } 00:13:23.193 ] 00:13:23.193 } 00:13:23.193 } 00:13:23.193 }' 00:13:23.193 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:13:23.193 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:23.193 pt2' 00:13:23.193 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:23.193 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:23.193 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:23.193 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:23.193 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.193 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:23.193 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.450 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.450 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:23.450 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:23.450 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:23.450 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:23.450 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:23.450 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.450 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.450 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.450 13:08:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:23.450 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:23.450 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:23.450 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:23.450 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.450 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.450 [2024-12-06 13:08:10.332223] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:23.450 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.450 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ecd52ca4-111a-4753-8c38-42d85f4417f7 '!=' ecd52ca4-111a-4753-8c38-42d85f4417f7 ']' 00:13:23.450 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:13:23.450 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:23.450 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:23.450 13:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62398 00:13:23.450 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62398 ']' 00:13:23.450 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62398 00:13:23.450 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:23.450 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:23.450 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62398 00:13:23.450 killing process with pid 62398 
00:13:23.450 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:23.450 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:23.450 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62398' 00:13:23.450 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62398 00:13:23.450 13:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62398 00:13:23.450 [2024-12-06 13:08:10.425751] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:23.450 [2024-12-06 13:08:10.425922] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:23.450 [2024-12-06 13:08:10.426012] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:23.450 [2024-12-06 13:08:10.426037] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:23.707 [2024-12-06 13:08:10.666105] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:25.077 ************************************ 00:13:25.077 END TEST raid_superblock_test 00:13:25.077 ************************************ 00:13:25.077 13:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:25.077 00:13:25.077 real 0m5.314s 00:13:25.077 user 0m7.694s 00:13:25.077 sys 0m0.754s 00:13:25.077 13:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:25.077 13:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.077 13:08:11 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:13:25.077 13:08:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:25.077 13:08:11 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:13:25.077 13:08:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:25.078 ************************************ 00:13:25.078 START TEST raid_read_error_test 00:13:25.078 ************************************ 00:13:25.078 13:08:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:13:25.078 13:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:25.078 13:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:13:25.078 13:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:25.078 13:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:25.078 13:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:25.078 13:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:25.078 13:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:25.078 13:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:25.078 13:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:25.078 13:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:25.078 13:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:25.078 13:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:25.078 13:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:25.078 13:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:25.078 13:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:25.078 13:08:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:25.078 13:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:25.078 13:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:25.078 13:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:25.078 13:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:25.078 13:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:25.078 13:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:25.078 13:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9vcUKUW8S2 00:13:25.078 13:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62615 00:13:25.078 13:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62615 00:13:25.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.078 13:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62615 ']' 00:13:25.078 13:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:25.078 13:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.078 13:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:25.078 13:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:25.078 13:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:25.078 13:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.335 [2024-12-06 13:08:12.125866] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:13:25.335 [2024-12-06 13:08:12.127184] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62615 ] 00:13:25.335 [2024-12-06 13:08:12.321935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.591 [2024-12-06 13:08:12.502459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.848 [2024-12-06 13:08:12.758582] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:25.848 [2024-12-06 13:08:12.758655] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:26.412 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:26.412 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:26.412 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:26.412 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:26.412 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.412 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.412 BaseBdev1_malloc 00:13:26.412 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.412 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:13:26.412 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.412 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.412 true 00:13:26.412 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.412 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:26.412 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.412 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.413 [2024-12-06 13:08:13.217554] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:26.413 [2024-12-06 13:08:13.217624] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.413 [2024-12-06 13:08:13.217654] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:26.413 [2024-12-06 13:08:13.217672] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.413 [2024-12-06 13:08:13.220436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.413 [2024-12-06 13:08:13.220496] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:26.413 BaseBdev1 00:13:26.413 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.413 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:26.413 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:26.413 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.413 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:26.413 BaseBdev2_malloc 00:13:26.413 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.413 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:26.413 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.413 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.413 true 00:13:26.413 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.413 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:26.413 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.413 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.413 [2024-12-06 13:08:13.281557] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:26.413 [2024-12-06 13:08:13.281624] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.413 [2024-12-06 13:08:13.281650] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:26.413 [2024-12-06 13:08:13.281668] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.413 [2024-12-06 13:08:13.284517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.413 [2024-12-06 13:08:13.284567] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:26.413 BaseBdev2 00:13:26.413 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.413 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:13:26.413 
13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.413 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.413 [2024-12-06 13:08:13.289632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:26.413 [2024-12-06 13:08:13.292140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:26.413 [2024-12-06 13:08:13.292401] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:26.413 [2024-12-06 13:08:13.292425] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:26.413 [2024-12-06 13:08:13.292738] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:26.413 [2024-12-06 13:08:13.292965] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:26.413 [2024-12-06 13:08:13.292987] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:26.413 [2024-12-06 13:08:13.293196] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:26.413 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.413 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:26.413 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:26.413 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:26.413 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:26.413 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:26.413 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:13:26.413 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.413 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.413 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.413 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.413 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.413 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.413 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.413 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.413 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.413 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.413 "name": "raid_bdev1", 00:13:26.413 "uuid": "1cfd53fd-4826-4020-900c-f4601cccbdbc", 00:13:26.413 "strip_size_kb": 64, 00:13:26.413 "state": "online", 00:13:26.413 "raid_level": "concat", 00:13:26.413 "superblock": true, 00:13:26.413 "num_base_bdevs": 2, 00:13:26.413 "num_base_bdevs_discovered": 2, 00:13:26.413 "num_base_bdevs_operational": 2, 00:13:26.413 "base_bdevs_list": [ 00:13:26.413 { 00:13:26.413 "name": "BaseBdev1", 00:13:26.413 "uuid": "d17ff7c9-7448-5b81-b7e8-1c0d9dce7a9d", 00:13:26.413 "is_configured": true, 00:13:26.413 "data_offset": 2048, 00:13:26.413 "data_size": 63488 00:13:26.413 }, 00:13:26.413 { 00:13:26.413 "name": "BaseBdev2", 00:13:26.413 "uuid": "92d81c37-a44b-558b-940d-e9bd644d8808", 00:13:26.413 "is_configured": true, 00:13:26.413 "data_offset": 2048, 00:13:26.413 "data_size": 63488 00:13:26.413 } 00:13:26.413 ] 00:13:26.413 }' 00:13:26.413 13:08:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.413 13:08:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.978 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:26.978 13:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:26.978 [2024-12-06 13:08:13.871373] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:27.961 13:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:27.961 13:08:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.961 13:08:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.961 13:08:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.961 13:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:27.961 13:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:27.961 13:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:13:27.961 13:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:27.961 13:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:27.961 13:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:27.961 13:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:27.961 13:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:27.961 13:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:13:27.961 13:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.961 13:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.961 13:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.961 13:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.961 13:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.961 13:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.961 13:08:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.961 13:08:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.961 13:08:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.961 13:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.961 "name": "raid_bdev1", 00:13:27.961 "uuid": "1cfd53fd-4826-4020-900c-f4601cccbdbc", 00:13:27.961 "strip_size_kb": 64, 00:13:27.961 "state": "online", 00:13:27.961 "raid_level": "concat", 00:13:27.961 "superblock": true, 00:13:27.961 "num_base_bdevs": 2, 00:13:27.961 "num_base_bdevs_discovered": 2, 00:13:27.961 "num_base_bdevs_operational": 2, 00:13:27.961 "base_bdevs_list": [ 00:13:27.961 { 00:13:27.961 "name": "BaseBdev1", 00:13:27.961 "uuid": "d17ff7c9-7448-5b81-b7e8-1c0d9dce7a9d", 00:13:27.961 "is_configured": true, 00:13:27.961 "data_offset": 2048, 00:13:27.961 "data_size": 63488 00:13:27.961 }, 00:13:27.961 { 00:13:27.961 "name": "BaseBdev2", 00:13:27.961 "uuid": "92d81c37-a44b-558b-940d-e9bd644d8808", 00:13:27.961 "is_configured": true, 00:13:27.961 "data_offset": 2048, 00:13:27.961 "data_size": 63488 00:13:27.961 } 00:13:27.961 ] 00:13:27.961 }' 00:13:27.961 13:08:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.961 13:08:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.527 13:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:28.527 13:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.527 13:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.527 [2024-12-06 13:08:15.290968] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:28.527 [2024-12-06 13:08:15.291144] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:28.527 [2024-12-06 13:08:15.294809] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:28.527 [2024-12-06 13:08:15.294995] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:28.527 [2024-12-06 13:08:15.295054] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:28.527 [2024-12-06 13:08:15.295078] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:28.527 { 00:13:28.527 "results": [ 00:13:28.527 { 00:13:28.527 "job": "raid_bdev1", 00:13:28.527 "core_mask": "0x1", 00:13:28.527 "workload": "randrw", 00:13:28.527 "percentage": 50, 00:13:28.527 "status": "finished", 00:13:28.527 "queue_depth": 1, 00:13:28.527 "io_size": 131072, 00:13:28.527 "runtime": 1.417448, 00:13:28.527 "iops": 9941.105423267732, 00:13:28.527 "mibps": 1242.6381779084666, 00:13:28.527 "io_failed": 1, 00:13:28.527 "io_timeout": 0, 00:13:28.527 "avg_latency_us": 140.30457925838002, 00:13:28.527 "min_latency_us": 37.93454545454546, 00:13:28.527 "max_latency_us": 1854.370909090909 00:13:28.527 } 00:13:28.527 ], 00:13:28.527 "core_count": 1 00:13:28.527 } 00:13:28.527 13:08:15 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.527 13:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62615 00:13:28.527 13:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62615 ']' 00:13:28.527 13:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62615 00:13:28.527 13:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:13:28.527 13:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:28.527 13:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62615 00:13:28.527 killing process with pid 62615 00:13:28.527 13:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:28.527 13:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:28.527 13:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62615' 00:13:28.527 13:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62615 00:13:28.527 [2024-12-06 13:08:15.333522] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:28.527 13:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62615 00:13:28.527 [2024-12-06 13:08:15.460004] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:29.900 13:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9vcUKUW8S2 00:13:29.900 13:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:29.900 13:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:29.900 ************************************ 00:13:29.900 END TEST raid_read_error_test 00:13:29.900 
************************************ 00:13:29.900 13:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:13:29.900 13:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:29.900 13:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:29.900 13:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:29.900 13:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:13:29.900 00:13:29.900 real 0m4.615s 00:13:29.900 user 0m5.702s 00:13:29.901 sys 0m0.623s 00:13:29.901 13:08:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:29.901 13:08:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.901 13:08:16 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:13:29.901 13:08:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:29.901 13:08:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:29.901 13:08:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:29.901 ************************************ 00:13:29.901 START TEST raid_write_error_test 00:13:29.901 ************************************ 00:13:29.901 13:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:13:29.901 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:29.901 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:13:29.901 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:29.901 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:29.901 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= 
num_base_bdevs )) 00:13:29.901 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:29.901 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:29.901 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:29.901 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:29.901 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:29.901 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:29.901 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:29.901 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:29.901 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:29.901 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:29.901 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:29.901 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:29.901 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:29.901 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:29.901 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:29.901 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:29.901 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:29.901 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.32BFNr7jbG 00:13:29.901 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:13:29.901 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62766 00:13:29.901 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62766 00:13:29.901 13:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62766 ']' 00:13:29.901 13:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:29.901 13:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.901 13:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:29.901 13:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.901 13:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:29.901 13:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.901 [2024-12-06 13:08:16.781488] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:13:29.901 [2024-12-06 13:08:16.781678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62766 ] 00:13:30.157 [2024-12-06 13:08:16.974401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.158 [2024-12-06 13:08:17.138276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.414 [2024-12-06 13:08:17.349689] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:30.414 [2024-12-06 13:08:17.349761] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:30.979 13:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:30.979 13:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:30.979 13:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:30.979 13:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:30.979 13:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.979 13:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.979 BaseBdev1_malloc 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.980 true 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.980 [2024-12-06 13:08:17.823688] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:30.980 [2024-12-06 13:08:17.823757] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.980 [2024-12-06 13:08:17.823787] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:30.980 [2024-12-06 13:08:17.823805] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.980 [2024-12-06 13:08:17.826662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.980 [2024-12-06 13:08:17.826718] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:30.980 BaseBdev1 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.980 BaseBdev2_malloc 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:30.980 13:08:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.980 true 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.980 [2024-12-06 13:08:17.879609] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:30.980 [2024-12-06 13:08:17.879706] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.980 [2024-12-06 13:08:17.879736] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:30.980 [2024-12-06 13:08:17.879753] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.980 [2024-12-06 13:08:17.882826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.980 [2024-12-06 13:08:17.882890] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:30.980 BaseBdev2 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.980 [2024-12-06 13:08:17.887775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:13:30.980 [2024-12-06 13:08:17.890315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:30.980 [2024-12-06 13:08:17.890822] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:30.980 [2024-12-06 13:08:17.890886] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:30.980 [2024-12-06 13:08:17.891203] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:30.980 [2024-12-06 13:08:17.891496] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:30.980 [2024-12-06 13:08:17.891518] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:30.980 [2024-12-06 13:08:17.891779] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.980 13:08:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.980 "name": "raid_bdev1", 00:13:30.980 "uuid": "95c78f92-68b0-4338-8745-a06fa34526bf", 00:13:30.980 "strip_size_kb": 64, 00:13:30.980 "state": "online", 00:13:30.980 "raid_level": "concat", 00:13:30.980 "superblock": true, 00:13:30.980 "num_base_bdevs": 2, 00:13:30.980 "num_base_bdevs_discovered": 2, 00:13:30.980 "num_base_bdevs_operational": 2, 00:13:30.980 "base_bdevs_list": [ 00:13:30.980 { 00:13:30.980 "name": "BaseBdev1", 00:13:30.980 "uuid": "06261529-5591-5fe2-9885-eff7c713c9b8", 00:13:30.980 "is_configured": true, 00:13:30.980 "data_offset": 2048, 00:13:30.980 "data_size": 63488 00:13:30.980 }, 00:13:30.980 { 00:13:30.980 "name": "BaseBdev2", 00:13:30.980 "uuid": "d9732fc9-1c7a-560c-a00a-fb2094db6447", 00:13:30.980 "is_configured": true, 00:13:30.980 "data_offset": 2048, 00:13:30.980 "data_size": 63488 00:13:30.980 } 00:13:30.980 ] 00:13:30.980 }' 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.980 13:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.545 13:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:13:31.545 13:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:31.545 [2024-12-06 13:08:18.533522] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:32.554 13:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:32.554 13:08:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.554 13:08:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.554 13:08:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.554 13:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:32.554 13:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:32.554 13:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:13:32.554 13:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:32.554 13:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.554 13:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.554 13:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:32.554 13:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:32.554 13:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:32.554 13:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.554 13:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:13:32.554 13:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.554 13:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.554 13:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.554 13:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.554 13:08:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.554 13:08:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.554 13:08:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.554 13:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.554 "name": "raid_bdev1", 00:13:32.554 "uuid": "95c78f92-68b0-4338-8745-a06fa34526bf", 00:13:32.554 "strip_size_kb": 64, 00:13:32.554 "state": "online", 00:13:32.554 "raid_level": "concat", 00:13:32.554 "superblock": true, 00:13:32.554 "num_base_bdevs": 2, 00:13:32.554 "num_base_bdevs_discovered": 2, 00:13:32.554 "num_base_bdevs_operational": 2, 00:13:32.554 "base_bdevs_list": [ 00:13:32.554 { 00:13:32.554 "name": "BaseBdev1", 00:13:32.554 "uuid": "06261529-5591-5fe2-9885-eff7c713c9b8", 00:13:32.554 "is_configured": true, 00:13:32.554 "data_offset": 2048, 00:13:32.554 "data_size": 63488 00:13:32.554 }, 00:13:32.554 { 00:13:32.554 "name": "BaseBdev2", 00:13:32.554 "uuid": "d9732fc9-1c7a-560c-a00a-fb2094db6447", 00:13:32.554 "is_configured": true, 00:13:32.554 "data_offset": 2048, 00:13:32.554 "data_size": 63488 00:13:32.554 } 00:13:32.554 ] 00:13:32.554 }' 00:13:32.554 13:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.554 13:08:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.122 13:08:19 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:33.122 13:08:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.122 13:08:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.122 [2024-12-06 13:08:19.993333] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:33.122 [2024-12-06 13:08:19.993373] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:33.122 [2024-12-06 13:08:19.997330] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:33.122 { 00:13:33.122 "results": [ 00:13:33.122 { 00:13:33.122 "job": "raid_bdev1", 00:13:33.122 "core_mask": "0x1", 00:13:33.122 "workload": "randrw", 00:13:33.122 "percentage": 50, 00:13:33.122 "status": "finished", 00:13:33.122 "queue_depth": 1, 00:13:33.122 "io_size": 131072, 00:13:33.122 "runtime": 1.457634, 00:13:33.122 "iops": 9903.034643813193, 00:13:33.122 "mibps": 1237.879330476649, 00:13:33.122 "io_failed": 1, 00:13:33.122 "io_timeout": 0, 00:13:33.122 "avg_latency_us": 140.48812514169123, 00:13:33.122 "min_latency_us": 39.56363636363636, 00:13:33.122 "max_latency_us": 1936.290909090909 00:13:33.122 } 00:13:33.122 ], 00:13:33.122 "core_count": 1 00:13:33.122 } 00:13:33.122 [2024-12-06 13:08:19.997617] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.122 [2024-12-06 13:08:19.997677] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:33.122 [2024-12-06 13:08:19.997702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:33.122 13:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.122 13:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62766 00:13:33.122 13:08:20 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@954 -- # '[' -z 62766 ']' 00:13:33.122 13:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62766 00:13:33.122 13:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:13:33.122 13:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:33.122 13:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62766 00:13:33.122 killing process with pid 62766 00:13:33.122 13:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:33.122 13:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:33.122 13:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62766' 00:13:33.122 13:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62766 00:13:33.122 [2024-12-06 13:08:20.038225] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:33.122 13:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62766 00:13:33.380 [2024-12-06 13:08:20.168876] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:34.315 13:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.32BFNr7jbG 00:13:34.315 13:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:34.315 13:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:34.315 13:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:13:34.315 13:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:34.315 13:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:34.315 13:08:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:13:34.315 13:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:13:34.315 00:13:34.315 real 0m4.658s 00:13:34.315 user 0m5.865s 00:13:34.315 sys 0m0.570s 00:13:34.315 13:08:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:34.315 13:08:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.315 ************************************ 00:13:34.315 END TEST raid_write_error_test 00:13:34.315 ************************************ 00:13:34.573 13:08:21 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:34.573 13:08:21 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:13:34.573 13:08:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:34.573 13:08:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:34.573 13:08:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:34.573 ************************************ 00:13:34.573 START TEST raid_state_function_test 00:13:34.573 ************************************ 00:13:34.573 13:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:13:34.573 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:34.573 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:13:34.573 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:34.574 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:34.574 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:34.574 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:13:34.574 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:34.574 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:34.574 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:34.574 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:34.574 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:34.574 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:34.574 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:34.574 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:34.574 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:34.574 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:34.574 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:34.574 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:34.574 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:34.574 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:34.574 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:34.574 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:34.574 Process raid pid: 62910 00:13:34.574 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62910 00:13:34.574 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:34.574 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62910' 00:13:34.574 13:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62910 00:13:34.574 13:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62910 ']' 00:13:34.574 13:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.574 13:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:34.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.574 13:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.574 13:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:34.574 13:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.574 [2024-12-06 13:08:21.493217] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:13:34.574 [2024-12-06 13:08:21.493410] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:34.833 [2024-12-06 13:08:21.682651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.833 [2024-12-06 13:08:21.816870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.091 [2024-12-06 13:08:22.046708] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:35.091 [2024-12-06 13:08:22.046773] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:35.662 13:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:35.662 13:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:35.662 13:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:35.662 13:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.662 13:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.662 [2024-12-06 13:08:22.520125] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:35.662 [2024-12-06 13:08:22.520194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:35.662 [2024-12-06 13:08:22.520211] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:35.662 [2024-12-06 13:08:22.520227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:35.662 13:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.662 13:08:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:35.662 13:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:35.662 13:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:35.662 13:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.662 13:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.662 13:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:35.662 13:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.662 13:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.662 13:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.662 13:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.662 13:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.662 13:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.662 13:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.662 13:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.662 13:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.662 13:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.662 "name": "Existed_Raid", 00:13:35.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.662 "strip_size_kb": 0, 00:13:35.662 "state": "configuring", 00:13:35.662 
"raid_level": "raid1", 00:13:35.662 "superblock": false, 00:13:35.662 "num_base_bdevs": 2, 00:13:35.662 "num_base_bdevs_discovered": 0, 00:13:35.662 "num_base_bdevs_operational": 2, 00:13:35.662 "base_bdevs_list": [ 00:13:35.662 { 00:13:35.662 "name": "BaseBdev1", 00:13:35.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.662 "is_configured": false, 00:13:35.662 "data_offset": 0, 00:13:35.662 "data_size": 0 00:13:35.662 }, 00:13:35.662 { 00:13:35.662 "name": "BaseBdev2", 00:13:35.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.662 "is_configured": false, 00:13:35.662 "data_offset": 0, 00:13:35.662 "data_size": 0 00:13:35.662 } 00:13:35.662 ] 00:13:35.662 }' 00:13:35.662 13:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.662 13:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.233 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:36.233 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.233 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.233 [2024-12-06 13:08:23.048265] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:36.233 [2024-12-06 13:08:23.048448] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:36.233 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:36.234 [2024-12-06 13:08:23.060197] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:36.234 [2024-12-06 13:08:23.060413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:36.234 [2024-12-06 13:08:23.060439] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:36.234 [2024-12-06 13:08:23.060461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.234 [2024-12-06 13:08:23.105198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:36.234 BaseBdev1 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.234 [ 00:13:36.234 { 00:13:36.234 "name": "BaseBdev1", 00:13:36.234 "aliases": [ 00:13:36.234 "b2776c7e-f7a7-4bf0-84a1-ddc2a7cc1c23" 00:13:36.234 ], 00:13:36.234 "product_name": "Malloc disk", 00:13:36.234 "block_size": 512, 00:13:36.234 "num_blocks": 65536, 00:13:36.234 "uuid": "b2776c7e-f7a7-4bf0-84a1-ddc2a7cc1c23", 00:13:36.234 "assigned_rate_limits": { 00:13:36.234 "rw_ios_per_sec": 0, 00:13:36.234 "rw_mbytes_per_sec": 0, 00:13:36.234 "r_mbytes_per_sec": 0, 00:13:36.234 "w_mbytes_per_sec": 0 00:13:36.234 }, 00:13:36.234 "claimed": true, 00:13:36.234 "claim_type": "exclusive_write", 00:13:36.234 "zoned": false, 00:13:36.234 "supported_io_types": { 00:13:36.234 "read": true, 00:13:36.234 "write": true, 00:13:36.234 "unmap": true, 00:13:36.234 "flush": true, 00:13:36.234 "reset": true, 00:13:36.234 "nvme_admin": false, 00:13:36.234 "nvme_io": false, 00:13:36.234 "nvme_io_md": false, 00:13:36.234 "write_zeroes": true, 00:13:36.234 "zcopy": true, 00:13:36.234 "get_zone_info": false, 00:13:36.234 "zone_management": false, 00:13:36.234 "zone_append": false, 00:13:36.234 "compare": false, 00:13:36.234 "compare_and_write": false, 00:13:36.234 "abort": true, 00:13:36.234 "seek_hole": false, 00:13:36.234 "seek_data": false, 00:13:36.234 "copy": true, 00:13:36.234 "nvme_iov_md": 
false 00:13:36.234 }, 00:13:36.234 "memory_domains": [ 00:13:36.234 { 00:13:36.234 "dma_device_id": "system", 00:13:36.234 "dma_device_type": 1 00:13:36.234 }, 00:13:36.234 { 00:13:36.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.234 "dma_device_type": 2 00:13:36.234 } 00:13:36.234 ], 00:13:36.234 "driver_specific": {} 00:13:36.234 } 00:13:36.234 ] 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.234 
13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.234 "name": "Existed_Raid", 00:13:36.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.234 "strip_size_kb": 0, 00:13:36.234 "state": "configuring", 00:13:36.234 "raid_level": "raid1", 00:13:36.234 "superblock": false, 00:13:36.234 "num_base_bdevs": 2, 00:13:36.234 "num_base_bdevs_discovered": 1, 00:13:36.234 "num_base_bdevs_operational": 2, 00:13:36.234 "base_bdevs_list": [ 00:13:36.234 { 00:13:36.234 "name": "BaseBdev1", 00:13:36.234 "uuid": "b2776c7e-f7a7-4bf0-84a1-ddc2a7cc1c23", 00:13:36.234 "is_configured": true, 00:13:36.234 "data_offset": 0, 00:13:36.234 "data_size": 65536 00:13:36.234 }, 00:13:36.234 { 00:13:36.234 "name": "BaseBdev2", 00:13:36.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.234 "is_configured": false, 00:13:36.234 "data_offset": 0, 00:13:36.234 "data_size": 0 00:13:36.234 } 00:13:36.234 ] 00:13:36.234 }' 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.234 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.800 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:36.800 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.800 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.800 [2024-12-06 13:08:23.653435] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:36.800 [2024-12-06 13:08:23.653511] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:36.800 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.800 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:36.800 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.800 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.800 [2024-12-06 13:08:23.661450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:36.800 [2024-12-06 13:08:23.663943] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:36.800 [2024-12-06 13:08:23.663995] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:36.800 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.800 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:36.800 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:36.800 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:36.800 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:36.800 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:36.800 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.800 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.800 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:13:36.800 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.800 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.800 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.800 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.800 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.800 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.800 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.800 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.800 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.800 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.800 "name": "Existed_Raid", 00:13:36.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.800 "strip_size_kb": 0, 00:13:36.800 "state": "configuring", 00:13:36.800 "raid_level": "raid1", 00:13:36.800 "superblock": false, 00:13:36.800 "num_base_bdevs": 2, 00:13:36.800 "num_base_bdevs_discovered": 1, 00:13:36.800 "num_base_bdevs_operational": 2, 00:13:36.800 "base_bdevs_list": [ 00:13:36.800 { 00:13:36.800 "name": "BaseBdev1", 00:13:36.800 "uuid": "b2776c7e-f7a7-4bf0-84a1-ddc2a7cc1c23", 00:13:36.800 "is_configured": true, 00:13:36.800 "data_offset": 0, 00:13:36.800 "data_size": 65536 00:13:36.800 }, 00:13:36.800 { 00:13:36.800 "name": "BaseBdev2", 00:13:36.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.800 "is_configured": false, 00:13:36.800 "data_offset": 0, 00:13:36.800 "data_size": 0 00:13:36.800 } 00:13:36.800 ] 
00:13:36.800 }' 00:13:36.800 13:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.800 13:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.368 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:37.368 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.368 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.368 [2024-12-06 13:08:24.192317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:37.368 [2024-12-06 13:08:24.192402] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:37.368 [2024-12-06 13:08:24.192415] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:37.368 [2024-12-06 13:08:24.192773] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:37.368 [2024-12-06 13:08:24.193007] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:37.368 [2024-12-06 13:08:24.193030] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:37.368 [2024-12-06 13:08:24.193338] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.368 BaseBdev2 00:13:37.368 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.368 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:37.368 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:37.368 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:37.368 13:08:24 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:13:37.368 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:37.368 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:37.368 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:37.369 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.369 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.369 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.369 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:37.369 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.369 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.369 [ 00:13:37.369 { 00:13:37.369 "name": "BaseBdev2", 00:13:37.369 "aliases": [ 00:13:37.369 "16df5be0-e580-4b60-a480-ba5282330764" 00:13:37.369 ], 00:13:37.369 "product_name": "Malloc disk", 00:13:37.369 "block_size": 512, 00:13:37.369 "num_blocks": 65536, 00:13:37.369 "uuid": "16df5be0-e580-4b60-a480-ba5282330764", 00:13:37.369 "assigned_rate_limits": { 00:13:37.369 "rw_ios_per_sec": 0, 00:13:37.369 "rw_mbytes_per_sec": 0, 00:13:37.369 "r_mbytes_per_sec": 0, 00:13:37.369 "w_mbytes_per_sec": 0 00:13:37.369 }, 00:13:37.369 "claimed": true, 00:13:37.369 "claim_type": "exclusive_write", 00:13:37.369 "zoned": false, 00:13:37.369 "supported_io_types": { 00:13:37.369 "read": true, 00:13:37.369 "write": true, 00:13:37.369 "unmap": true, 00:13:37.369 "flush": true, 00:13:37.369 "reset": true, 00:13:37.369 "nvme_admin": false, 00:13:37.369 "nvme_io": false, 00:13:37.369 "nvme_io_md": false, 00:13:37.369 "write_zeroes": 
true, 00:13:37.369 "zcopy": true, 00:13:37.369 "get_zone_info": false, 00:13:37.369 "zone_management": false, 00:13:37.369 "zone_append": false, 00:13:37.369 "compare": false, 00:13:37.369 "compare_and_write": false, 00:13:37.369 "abort": true, 00:13:37.369 "seek_hole": false, 00:13:37.369 "seek_data": false, 00:13:37.369 "copy": true, 00:13:37.369 "nvme_iov_md": false 00:13:37.369 }, 00:13:37.369 "memory_domains": [ 00:13:37.369 { 00:13:37.369 "dma_device_id": "system", 00:13:37.369 "dma_device_type": 1 00:13:37.369 }, 00:13:37.369 { 00:13:37.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.369 "dma_device_type": 2 00:13:37.369 } 00:13:37.369 ], 00:13:37.369 "driver_specific": {} 00:13:37.369 } 00:13:37.369 ] 00:13:37.369 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.369 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:37.369 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:37.369 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:37.369 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:37.369 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:37.369 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.369 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.369 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.369 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:37.369 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.369 13:08:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.369 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.369 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.369 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.369 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:37.369 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.369 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.369 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.369 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.369 "name": "Existed_Raid", 00:13:37.369 "uuid": "5d3f02ab-b918-4789-8898-21cf1f607872", 00:13:37.369 "strip_size_kb": 0, 00:13:37.369 "state": "online", 00:13:37.369 "raid_level": "raid1", 00:13:37.369 "superblock": false, 00:13:37.369 "num_base_bdevs": 2, 00:13:37.369 "num_base_bdevs_discovered": 2, 00:13:37.369 "num_base_bdevs_operational": 2, 00:13:37.369 "base_bdevs_list": [ 00:13:37.369 { 00:13:37.369 "name": "BaseBdev1", 00:13:37.369 "uuid": "b2776c7e-f7a7-4bf0-84a1-ddc2a7cc1c23", 00:13:37.369 "is_configured": true, 00:13:37.369 "data_offset": 0, 00:13:37.369 "data_size": 65536 00:13:37.369 }, 00:13:37.369 { 00:13:37.369 "name": "BaseBdev2", 00:13:37.369 "uuid": "16df5be0-e580-4b60-a480-ba5282330764", 00:13:37.369 "is_configured": true, 00:13:37.369 "data_offset": 0, 00:13:37.369 "data_size": 65536 00:13:37.369 } 00:13:37.369 ] 00:13:37.369 }' 00:13:37.369 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.369 13:08:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.936 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:37.936 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:37.936 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:37.936 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:37.936 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:37.936 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:37.936 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:37.936 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.936 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:37.936 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.936 [2024-12-06 13:08:24.736884] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:37.936 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.936 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:37.936 "name": "Existed_Raid", 00:13:37.936 "aliases": [ 00:13:37.936 "5d3f02ab-b918-4789-8898-21cf1f607872" 00:13:37.936 ], 00:13:37.936 "product_name": "Raid Volume", 00:13:37.936 "block_size": 512, 00:13:37.936 "num_blocks": 65536, 00:13:37.936 "uuid": "5d3f02ab-b918-4789-8898-21cf1f607872", 00:13:37.936 "assigned_rate_limits": { 00:13:37.936 "rw_ios_per_sec": 0, 00:13:37.936 "rw_mbytes_per_sec": 0, 00:13:37.936 "r_mbytes_per_sec": 0, 00:13:37.936 
"w_mbytes_per_sec": 0 00:13:37.936 }, 00:13:37.936 "claimed": false, 00:13:37.936 "zoned": false, 00:13:37.936 "supported_io_types": { 00:13:37.936 "read": true, 00:13:37.936 "write": true, 00:13:37.936 "unmap": false, 00:13:37.936 "flush": false, 00:13:37.936 "reset": true, 00:13:37.936 "nvme_admin": false, 00:13:37.937 "nvme_io": false, 00:13:37.937 "nvme_io_md": false, 00:13:37.937 "write_zeroes": true, 00:13:37.937 "zcopy": false, 00:13:37.937 "get_zone_info": false, 00:13:37.937 "zone_management": false, 00:13:37.937 "zone_append": false, 00:13:37.937 "compare": false, 00:13:37.937 "compare_and_write": false, 00:13:37.937 "abort": false, 00:13:37.937 "seek_hole": false, 00:13:37.937 "seek_data": false, 00:13:37.937 "copy": false, 00:13:37.937 "nvme_iov_md": false 00:13:37.937 }, 00:13:37.937 "memory_domains": [ 00:13:37.937 { 00:13:37.937 "dma_device_id": "system", 00:13:37.937 "dma_device_type": 1 00:13:37.937 }, 00:13:37.937 { 00:13:37.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.937 "dma_device_type": 2 00:13:37.937 }, 00:13:37.937 { 00:13:37.937 "dma_device_id": "system", 00:13:37.937 "dma_device_type": 1 00:13:37.937 }, 00:13:37.937 { 00:13:37.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.937 "dma_device_type": 2 00:13:37.937 } 00:13:37.937 ], 00:13:37.937 "driver_specific": { 00:13:37.937 "raid": { 00:13:37.937 "uuid": "5d3f02ab-b918-4789-8898-21cf1f607872", 00:13:37.937 "strip_size_kb": 0, 00:13:37.937 "state": "online", 00:13:37.937 "raid_level": "raid1", 00:13:37.937 "superblock": false, 00:13:37.937 "num_base_bdevs": 2, 00:13:37.937 "num_base_bdevs_discovered": 2, 00:13:37.937 "num_base_bdevs_operational": 2, 00:13:37.937 "base_bdevs_list": [ 00:13:37.937 { 00:13:37.937 "name": "BaseBdev1", 00:13:37.937 "uuid": "b2776c7e-f7a7-4bf0-84a1-ddc2a7cc1c23", 00:13:37.937 "is_configured": true, 00:13:37.937 "data_offset": 0, 00:13:37.937 "data_size": 65536 00:13:37.937 }, 00:13:37.937 { 00:13:37.937 "name": "BaseBdev2", 00:13:37.937 "uuid": 
"16df5be0-e580-4b60-a480-ba5282330764", 00:13:37.937 "is_configured": true, 00:13:37.937 "data_offset": 0, 00:13:37.937 "data_size": 65536 00:13:37.937 } 00:13:37.937 ] 00:13:37.937 } 00:13:37.937 } 00:13:37.937 }' 00:13:37.937 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:37.937 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:37.937 BaseBdev2' 00:13:37.937 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.937 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:37.937 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:37.937 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:37.937 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.937 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.937 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.937 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.937 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.937 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:37.937 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:37.937 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:37.937 13:08:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.937 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.937 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:38.196 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.196 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:38.196 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:38.196 13:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:38.196 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.196 13:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.196 [2024-12-06 13:08:24.996632] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:38.196 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.196 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:38.196 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:38.196 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:38.196 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:38.196 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:38.196 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:13:38.196 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:13:38.196 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.196 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.196 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.196 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:38.196 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.196 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.196 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.196 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.196 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.196 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.196 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.196 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.196 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.196 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.196 "name": "Existed_Raid", 00:13:38.196 "uuid": "5d3f02ab-b918-4789-8898-21cf1f607872", 00:13:38.196 "strip_size_kb": 0, 00:13:38.196 "state": "online", 00:13:38.196 "raid_level": "raid1", 00:13:38.196 "superblock": false, 00:13:38.196 "num_base_bdevs": 2, 00:13:38.196 "num_base_bdevs_discovered": 1, 00:13:38.196 "num_base_bdevs_operational": 1, 00:13:38.196 "base_bdevs_list": [ 00:13:38.196 { 
00:13:38.196 "name": null, 00:13:38.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.196 "is_configured": false, 00:13:38.196 "data_offset": 0, 00:13:38.196 "data_size": 65536 00:13:38.196 }, 00:13:38.196 { 00:13:38.196 "name": "BaseBdev2", 00:13:38.196 "uuid": "16df5be0-e580-4b60-a480-ba5282330764", 00:13:38.196 "is_configured": true, 00:13:38.196 "data_offset": 0, 00:13:38.196 "data_size": 65536 00:13:38.196 } 00:13:38.196 ] 00:13:38.196 }' 00:13:38.196 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.196 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.762 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:38.762 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:38.762 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.762 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.762 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:38.762 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.762 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.762 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:38.762 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:38.762 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:38.762 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.762 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:38.762 [2024-12-06 13:08:25.639887] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:38.762 [2024-12-06 13:08:25.640015] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:38.762 [2024-12-06 13:08:25.724424] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:38.762 [2024-12-06 13:08:25.724532] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:38.762 [2024-12-06 13:08:25.724555] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:38.762 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.762 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:38.762 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:38.762 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.762 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:38.762 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.762 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.762 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.020 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:39.020 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:39.020 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:13:39.020 13:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62910 00:13:39.020 13:08:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62910 ']' 00:13:39.020 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62910 00:13:39.020 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:39.020 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:39.020 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62910 00:13:39.020 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:39.020 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:39.020 killing process with pid 62910 00:13:39.020 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62910' 00:13:39.020 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62910 00:13:39.020 [2024-12-06 13:08:25.815867] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:39.020 13:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62910 00:13:39.020 [2024-12-06 13:08:25.830529] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:39.955 13:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:39.955 00:13:39.955 real 0m5.500s 00:13:39.955 user 0m8.276s 00:13:39.955 sys 0m0.841s 00:13:39.955 13:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:39.955 13:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.955 ************************************ 00:13:39.955 END TEST raid_state_function_test 00:13:39.955 ************************************ 00:13:39.955 13:08:26 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:13:39.955 13:08:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:39.955 13:08:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:39.955 13:08:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:39.955 ************************************ 00:13:39.955 START TEST raid_state_function_test_sb 00:13:39.955 ************************************ 00:13:39.955 13:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:13:39.955 13:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:39.955 13:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:13:39.955 13:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:39.955 13:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:39.955 13:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:39.955 13:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:39.955 13:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:39.955 13:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:39.955 13:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:39.955 13:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:39.955 13:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:39.955 13:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:39.955 13:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:39.955 13:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:39.955 13:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:39.955 13:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:39.955 13:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:39.955 13:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:39.955 13:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:39.955 13:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:39.955 13:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:39.955 13:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:39.955 13:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63163 00:13:39.955 13:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63163' 00:13:39.955 Process raid pid: 63163 00:13:39.955 13:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:39.955 13:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63163 00:13:39.955 13:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 63163 ']' 00:13:39.955 13:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.955 13:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:39.955 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.955 13:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.955 13:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:39.955 13:08:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.214 [2024-12-06 13:08:27.052760] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:13:40.214 [2024-12-06 13:08:27.052947] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.473 [2024-12-06 13:08:27.244108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.473 [2024-12-06 13:08:27.405248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.732 [2024-12-06 13:08:27.620475] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:40.732 [2024-12-06 13:08:27.620546] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:41.299 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:41.299 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:41.299 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:41.299 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.299 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.299 [2024-12-06 13:08:28.077906] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:41.299 [2024-12-06 13:08:28.077988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:41.299 [2024-12-06 13:08:28.078006] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:41.299 [2024-12-06 13:08:28.078023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:41.299 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.299 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:41.299 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:41.299 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:41.299 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.299 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.299 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:41.299 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.299 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.299 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.299 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.299 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.299 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:41.299 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.299 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.299 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.299 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.299 "name": "Existed_Raid", 00:13:41.299 "uuid": "10a74197-5209-4445-80f7-1c83deaaed5d", 00:13:41.299 "strip_size_kb": 0, 00:13:41.299 "state": "configuring", 00:13:41.299 "raid_level": "raid1", 00:13:41.299 "superblock": true, 00:13:41.299 "num_base_bdevs": 2, 00:13:41.299 "num_base_bdevs_discovered": 0, 00:13:41.299 "num_base_bdevs_operational": 2, 00:13:41.299 "base_bdevs_list": [ 00:13:41.299 { 00:13:41.299 "name": "BaseBdev1", 00:13:41.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.299 "is_configured": false, 00:13:41.299 "data_offset": 0, 00:13:41.299 "data_size": 0 00:13:41.299 }, 00:13:41.299 { 00:13:41.299 "name": "BaseBdev2", 00:13:41.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.299 "is_configured": false, 00:13:41.299 "data_offset": 0, 00:13:41.299 "data_size": 0 00:13:41.299 } 00:13:41.299 ] 00:13:41.299 }' 00:13:41.299 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.299 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.866 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:41.866 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.866 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.866 [2024-12-06 13:08:28.593963] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:13:41.866 [2024-12-06 13:08:28.594006] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:41.866 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.866 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:41.866 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.866 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.866 [2024-12-06 13:08:28.601943] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:41.866 [2024-12-06 13:08:28.601990] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:41.866 [2024-12-06 13:08:28.602022] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:41.866 [2024-12-06 13:08:28.602041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:41.866 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.866 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:41.866 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.866 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.866 [2024-12-06 13:08:28.647154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:41.866 BaseBdev1 00:13:41.866 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.866 13:08:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:41.866 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:41.866 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:41.866 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:41.866 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:41.866 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:41.866 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:41.866 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.866 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.866 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.866 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:41.866 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.866 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.866 [ 00:13:41.866 { 00:13:41.866 "name": "BaseBdev1", 00:13:41.866 "aliases": [ 00:13:41.866 "97c4611c-9a51-49e8-ae0f-ea3ac6d3cace" 00:13:41.866 ], 00:13:41.866 "product_name": "Malloc disk", 00:13:41.866 "block_size": 512, 00:13:41.866 "num_blocks": 65536, 00:13:41.866 "uuid": "97c4611c-9a51-49e8-ae0f-ea3ac6d3cace", 00:13:41.866 "assigned_rate_limits": { 00:13:41.866 "rw_ios_per_sec": 0, 00:13:41.866 "rw_mbytes_per_sec": 0, 00:13:41.866 "r_mbytes_per_sec": 0, 00:13:41.866 "w_mbytes_per_sec": 0 00:13:41.866 }, 00:13:41.866 "claimed": true, 
00:13:41.866 "claim_type": "exclusive_write", 00:13:41.866 "zoned": false, 00:13:41.866 "supported_io_types": { 00:13:41.866 "read": true, 00:13:41.867 "write": true, 00:13:41.867 "unmap": true, 00:13:41.867 "flush": true, 00:13:41.867 "reset": true, 00:13:41.867 "nvme_admin": false, 00:13:41.867 "nvme_io": false, 00:13:41.867 "nvme_io_md": false, 00:13:41.867 "write_zeroes": true, 00:13:41.867 "zcopy": true, 00:13:41.867 "get_zone_info": false, 00:13:41.867 "zone_management": false, 00:13:41.867 "zone_append": false, 00:13:41.867 "compare": false, 00:13:41.867 "compare_and_write": false, 00:13:41.867 "abort": true, 00:13:41.867 "seek_hole": false, 00:13:41.867 "seek_data": false, 00:13:41.867 "copy": true, 00:13:41.867 "nvme_iov_md": false 00:13:41.867 }, 00:13:41.867 "memory_domains": [ 00:13:41.867 { 00:13:41.867 "dma_device_id": "system", 00:13:41.867 "dma_device_type": 1 00:13:41.867 }, 00:13:41.867 { 00:13:41.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.867 "dma_device_type": 2 00:13:41.867 } 00:13:41.867 ], 00:13:41.867 "driver_specific": {} 00:13:41.867 } 00:13:41.867 ] 00:13:41.867 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.867 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:41.867 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:41.867 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:41.867 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:41.867 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.867 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.867 13:08:28 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:41.867 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.867 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.867 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.867 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.867 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.867 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.867 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.867 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.867 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.867 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.867 "name": "Existed_Raid", 00:13:41.867 "uuid": "e9128def-e337-48ca-8abd-ef7f09dbc063", 00:13:41.867 "strip_size_kb": 0, 00:13:41.867 "state": "configuring", 00:13:41.867 "raid_level": "raid1", 00:13:41.867 "superblock": true, 00:13:41.867 "num_base_bdevs": 2, 00:13:41.867 "num_base_bdevs_discovered": 1, 00:13:41.867 "num_base_bdevs_operational": 2, 00:13:41.867 "base_bdevs_list": [ 00:13:41.867 { 00:13:41.867 "name": "BaseBdev1", 00:13:41.867 "uuid": "97c4611c-9a51-49e8-ae0f-ea3ac6d3cace", 00:13:41.867 "is_configured": true, 00:13:41.867 "data_offset": 2048, 00:13:41.867 "data_size": 63488 00:13:41.867 }, 00:13:41.867 { 00:13:41.867 "name": "BaseBdev2", 00:13:41.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.867 "is_configured": false, 00:13:41.867 
"data_offset": 0, 00:13:41.867 "data_size": 0 00:13:41.867 } 00:13:41.867 ] 00:13:41.867 }' 00:13:41.867 13:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.867 13:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.433 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:42.433 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.433 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.433 [2024-12-06 13:08:29.179360] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:42.433 [2024-12-06 13:08:29.179441] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:42.433 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.433 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:42.433 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.433 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.433 [2024-12-06 13:08:29.187402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:42.433 [2024-12-06 13:08:29.189985] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:42.433 [2024-12-06 13:08:29.190052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:42.433 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.433 13:08:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:42.433 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:42.433 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:42.433 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:42.433 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:42.433 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.433 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.433 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:42.433 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.434 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.434 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.434 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.434 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.434 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.434 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.434 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.434 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.434 13:08:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.434 "name": "Existed_Raid", 00:13:42.434 "uuid": "2d64ab3c-d219-4822-9174-b829055c1c6b", 00:13:42.434 "strip_size_kb": 0, 00:13:42.434 "state": "configuring", 00:13:42.434 "raid_level": "raid1", 00:13:42.434 "superblock": true, 00:13:42.434 "num_base_bdevs": 2, 00:13:42.434 "num_base_bdevs_discovered": 1, 00:13:42.434 "num_base_bdevs_operational": 2, 00:13:42.434 "base_bdevs_list": [ 00:13:42.434 { 00:13:42.434 "name": "BaseBdev1", 00:13:42.434 "uuid": "97c4611c-9a51-49e8-ae0f-ea3ac6d3cace", 00:13:42.434 "is_configured": true, 00:13:42.434 "data_offset": 2048, 00:13:42.434 "data_size": 63488 00:13:42.434 }, 00:13:42.434 { 00:13:42.434 "name": "BaseBdev2", 00:13:42.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.434 "is_configured": false, 00:13:42.434 "data_offset": 0, 00:13:42.434 "data_size": 0 00:13:42.434 } 00:13:42.434 ] 00:13:42.434 }' 00:13:42.434 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.434 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.693 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:42.693 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.693 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.951 [2024-12-06 13:08:29.726625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:42.951 [2024-12-06 13:08:29.726960] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:42.951 [2024-12-06 13:08:29.726979] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:42.951 [2024-12-06 13:08:29.727343] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:42.951 
BaseBdev2 00:13:42.951 [2024-12-06 13:08:29.727601] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:42.951 [2024-12-06 13:08:29.727633] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:42.951 [2024-12-06 13:08:29.727848] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.951 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.951 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:42.951 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:42.951 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:42.951 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:42.951 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:42.951 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:42.951 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:42.951 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.951 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.951 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.952 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:42.952 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.952 13:08:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:42.952 [ 00:13:42.952 { 00:13:42.952 "name": "BaseBdev2", 00:13:42.952 "aliases": [ 00:13:42.952 "75793dc8-625e-40ca-b25b-4abbf615946b" 00:13:42.952 ], 00:13:42.952 "product_name": "Malloc disk", 00:13:42.952 "block_size": 512, 00:13:42.952 "num_blocks": 65536, 00:13:42.952 "uuid": "75793dc8-625e-40ca-b25b-4abbf615946b", 00:13:42.952 "assigned_rate_limits": { 00:13:42.952 "rw_ios_per_sec": 0, 00:13:42.952 "rw_mbytes_per_sec": 0, 00:13:42.952 "r_mbytes_per_sec": 0, 00:13:42.952 "w_mbytes_per_sec": 0 00:13:42.952 }, 00:13:42.952 "claimed": true, 00:13:42.952 "claim_type": "exclusive_write", 00:13:42.952 "zoned": false, 00:13:42.952 "supported_io_types": { 00:13:42.952 "read": true, 00:13:42.952 "write": true, 00:13:42.952 "unmap": true, 00:13:42.952 "flush": true, 00:13:42.952 "reset": true, 00:13:42.952 "nvme_admin": false, 00:13:42.952 "nvme_io": false, 00:13:42.952 "nvme_io_md": false, 00:13:42.952 "write_zeroes": true, 00:13:42.952 "zcopy": true, 00:13:42.952 "get_zone_info": false, 00:13:42.952 "zone_management": false, 00:13:42.952 "zone_append": false, 00:13:42.952 "compare": false, 00:13:42.952 "compare_and_write": false, 00:13:42.952 "abort": true, 00:13:42.952 "seek_hole": false, 00:13:42.952 "seek_data": false, 00:13:42.952 "copy": true, 00:13:42.952 "nvme_iov_md": false 00:13:42.952 }, 00:13:42.952 "memory_domains": [ 00:13:42.952 { 00:13:42.952 "dma_device_id": "system", 00:13:42.952 "dma_device_type": 1 00:13:42.952 }, 00:13:42.952 { 00:13:42.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.952 "dma_device_type": 2 00:13:42.952 } 00:13:42.952 ], 00:13:42.952 "driver_specific": {} 00:13:42.952 } 00:13:42.952 ] 00:13:42.952 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.952 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:42.952 13:08:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:42.952 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:42.952 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:42.952 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:42.952 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.952 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.952 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.952 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:42.952 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.952 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.952 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.952 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.952 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.952 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.952 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.952 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.952 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.952 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:13:42.952 "name": "Existed_Raid", 00:13:42.952 "uuid": "2d64ab3c-d219-4822-9174-b829055c1c6b", 00:13:42.952 "strip_size_kb": 0, 00:13:42.952 "state": "online", 00:13:42.952 "raid_level": "raid1", 00:13:42.952 "superblock": true, 00:13:42.952 "num_base_bdevs": 2, 00:13:42.952 "num_base_bdevs_discovered": 2, 00:13:42.952 "num_base_bdevs_operational": 2, 00:13:42.952 "base_bdevs_list": [ 00:13:42.952 { 00:13:42.952 "name": "BaseBdev1", 00:13:42.952 "uuid": "97c4611c-9a51-49e8-ae0f-ea3ac6d3cace", 00:13:42.952 "is_configured": true, 00:13:42.952 "data_offset": 2048, 00:13:42.952 "data_size": 63488 00:13:42.952 }, 00:13:42.952 { 00:13:42.952 "name": "BaseBdev2", 00:13:42.952 "uuid": "75793dc8-625e-40ca-b25b-4abbf615946b", 00:13:42.952 "is_configured": true, 00:13:42.952 "data_offset": 2048, 00:13:42.952 "data_size": 63488 00:13:42.952 } 00:13:42.952 ] 00:13:42.952 }' 00:13:42.952 13:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.952 13:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.520 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:43.520 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:43.520 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:43.520 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:43.520 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:43.520 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:43.520 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:43.520 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- 
# rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:43.520 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.520 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.520 [2024-12-06 13:08:30.279159] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:43.520 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.520 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:43.520 "name": "Existed_Raid", 00:13:43.520 "aliases": [ 00:13:43.520 "2d64ab3c-d219-4822-9174-b829055c1c6b" 00:13:43.520 ], 00:13:43.520 "product_name": "Raid Volume", 00:13:43.520 "block_size": 512, 00:13:43.520 "num_blocks": 63488, 00:13:43.520 "uuid": "2d64ab3c-d219-4822-9174-b829055c1c6b", 00:13:43.520 "assigned_rate_limits": { 00:13:43.520 "rw_ios_per_sec": 0, 00:13:43.520 "rw_mbytes_per_sec": 0, 00:13:43.520 "r_mbytes_per_sec": 0, 00:13:43.520 "w_mbytes_per_sec": 0 00:13:43.520 }, 00:13:43.520 "claimed": false, 00:13:43.520 "zoned": false, 00:13:43.520 "supported_io_types": { 00:13:43.520 "read": true, 00:13:43.520 "write": true, 00:13:43.520 "unmap": false, 00:13:43.520 "flush": false, 00:13:43.520 "reset": true, 00:13:43.520 "nvme_admin": false, 00:13:43.520 "nvme_io": false, 00:13:43.520 "nvme_io_md": false, 00:13:43.520 "write_zeroes": true, 00:13:43.520 "zcopy": false, 00:13:43.520 "get_zone_info": false, 00:13:43.520 "zone_management": false, 00:13:43.520 "zone_append": false, 00:13:43.520 "compare": false, 00:13:43.520 "compare_and_write": false, 00:13:43.520 "abort": false, 00:13:43.520 "seek_hole": false, 00:13:43.520 "seek_data": false, 00:13:43.520 "copy": false, 00:13:43.520 "nvme_iov_md": false 00:13:43.520 }, 00:13:43.520 "memory_domains": [ 00:13:43.520 { 00:13:43.520 "dma_device_id": "system", 00:13:43.520 "dma_device_type": 1 00:13:43.520 }, 
00:13:43.520 { 00:13:43.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.520 "dma_device_type": 2 00:13:43.520 }, 00:13:43.520 { 00:13:43.520 "dma_device_id": "system", 00:13:43.520 "dma_device_type": 1 00:13:43.520 }, 00:13:43.520 { 00:13:43.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.520 "dma_device_type": 2 00:13:43.520 } 00:13:43.520 ], 00:13:43.520 "driver_specific": { 00:13:43.520 "raid": { 00:13:43.520 "uuid": "2d64ab3c-d219-4822-9174-b829055c1c6b", 00:13:43.520 "strip_size_kb": 0, 00:13:43.520 "state": "online", 00:13:43.520 "raid_level": "raid1", 00:13:43.520 "superblock": true, 00:13:43.520 "num_base_bdevs": 2, 00:13:43.520 "num_base_bdevs_discovered": 2, 00:13:43.520 "num_base_bdevs_operational": 2, 00:13:43.520 "base_bdevs_list": [ 00:13:43.520 { 00:13:43.520 "name": "BaseBdev1", 00:13:43.520 "uuid": "97c4611c-9a51-49e8-ae0f-ea3ac6d3cace", 00:13:43.520 "is_configured": true, 00:13:43.520 "data_offset": 2048, 00:13:43.520 "data_size": 63488 00:13:43.520 }, 00:13:43.520 { 00:13:43.520 "name": "BaseBdev2", 00:13:43.520 "uuid": "75793dc8-625e-40ca-b25b-4abbf615946b", 00:13:43.520 "is_configured": true, 00:13:43.520 "data_offset": 2048, 00:13:43.520 "data_size": 63488 00:13:43.520 } 00:13:43.520 ] 00:13:43.520 } 00:13:43.520 } 00:13:43.520 }' 00:13:43.520 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:43.520 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:43.520 BaseBdev2' 00:13:43.520 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.520 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:43.520 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:13:43.520 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:43.520 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.520 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.520 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.520 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.520 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.520 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.520 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.520 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:43.520 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.520 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.520 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.779 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.779 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.779 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.779 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:43.779 13:08:30 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.779 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.779 [2024-12-06 13:08:30.574933] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:43.779 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.779 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:43.779 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:43.779 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:43.779 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:43.779 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:43.779 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:13:43.779 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:43.779 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.779 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.779 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.779 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:43.779 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.779 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.779 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.779 
13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.779 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.779 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.779 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.779 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.779 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.779 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.779 "name": "Existed_Raid", 00:13:43.779 "uuid": "2d64ab3c-d219-4822-9174-b829055c1c6b", 00:13:43.779 "strip_size_kb": 0, 00:13:43.779 "state": "online", 00:13:43.779 "raid_level": "raid1", 00:13:43.779 "superblock": true, 00:13:43.779 "num_base_bdevs": 2, 00:13:43.779 "num_base_bdevs_discovered": 1, 00:13:43.779 "num_base_bdevs_operational": 1, 00:13:43.779 "base_bdevs_list": [ 00:13:43.779 { 00:13:43.779 "name": null, 00:13:43.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.779 "is_configured": false, 00:13:43.779 "data_offset": 0, 00:13:43.779 "data_size": 63488 00:13:43.779 }, 00:13:43.779 { 00:13:43.779 "name": "BaseBdev2", 00:13:43.779 "uuid": "75793dc8-625e-40ca-b25b-4abbf615946b", 00:13:43.779 "is_configured": true, 00:13:43.779 "data_offset": 2048, 00:13:43.779 "data_size": 63488 00:13:43.779 } 00:13:43.779 ] 00:13:43.779 }' 00:13:43.779 13:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.779 13:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.344 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:44.344 13:08:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:44.344 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:44.344 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.344 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.344 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.344 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.344 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:44.344 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:44.344 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:44.344 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.344 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.344 [2024-12-06 13:08:31.209662] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:44.344 [2024-12-06 13:08:31.209802] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:44.344 [2024-12-06 13:08:31.298509] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:44.344 [2024-12-06 13:08:31.298585] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:44.344 [2024-12-06 13:08:31.298607] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:44.344 13:08:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.344 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:44.344 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:44.344 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.344 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.344 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.344 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:44.344 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.344 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:44.344 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:44.344 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:13:44.344 13:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63163 00:13:44.344 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63163 ']' 00:13:44.344 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63163 00:13:44.344 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:44.601 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:44.601 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63163 00:13:44.601 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:44.602 13:08:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:44.602 killing process with pid 63163 00:13:44.602 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63163' 00:13:44.602 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63163 00:13:44.602 [2024-12-06 13:08:31.384675] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:44.602 13:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 63163 00:13:44.602 [2024-12-06 13:08:31.399370] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:45.536 13:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:45.536 00:13:45.536 real 0m5.528s 00:13:45.536 user 0m8.385s 00:13:45.536 sys 0m0.758s 00:13:45.536 13:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:45.536 ************************************ 00:13:45.536 END TEST raid_state_function_test_sb 00:13:45.536 ************************************ 00:13:45.536 13:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.536 13:08:32 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:13:45.536 13:08:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:45.536 13:08:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:45.536 13:08:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:45.536 ************************************ 00:13:45.536 START TEST raid_superblock_test 00:13:45.536 ************************************ 00:13:45.536 13:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:13:45.536 13:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local 
raid_level=raid1
00:13:45.536 13:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2
00:13:45.536 13:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:13:45.536 13:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:13:45.536 13:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:13:45.536 13:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:13:45.536 13:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:13:45.536 13:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:13:45.536 13:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:13:45.536 13:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:13:45.536 13:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:13:45.536 13:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:13:45.536 13:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:13:45.536 13:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']'
00:13:45.536 13:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0
00:13:45.536 13:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63415
00:13:45.536 13:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:13:45.536 13:08:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63415
00:13:45.536 13:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63415 ']'
00:13:45.536 13:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:45.536 13:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:45.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:45.537 13:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:45.537 13:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:45.537 13:08:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:45.795 [2024-12-06 13:08:32.658441] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... [2024-12-06 13:08:32.658637] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63415 ]
00:13:46.077 [2024-12-06 13:08:32.844944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:46.077 [2024-12-06 13:08:32.976315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:46.335 [2024-12-06 13:08:33.182804] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:46.335 [2024-12-06 13:08:33.182917] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:46.902 13:08:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:46.902 13:08:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:13:46.902 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:13:46.902 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:13:46.902 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:13:46.902 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:13:46.902 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:13:46.902 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:13:46.902 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:13:46.902 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:13:46.902 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:13:46.902 13:08:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:46.902 13:08:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:46.902 malloc1
00:13:46.902 13:08:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:46.902 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:13:46.902 13:08:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:46.902 13:08:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:46.902 [2024-12-06 13:08:33.727287] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:13:46.902 [2024-12-06 13:08:33.727360] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:46.903 [2024-12-06 13:08:33.727392] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:13:46.903 [2024-12-06 13:08:33.727407] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:46.903 [2024-12-06 13:08:33.730225] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:46.903 [2024-12-06 13:08:33.730273] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:13:46.903 pt1
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:46.903 malloc2
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:46.903 [2024-12-06 13:08:33.783843] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:13:46.903 [2024-12-06 13:08:33.783922] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:46.903 [2024-12-06 13:08:33.783958] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:13:46.903 [2024-12-06 13:08:33.783973] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:46.903 [2024-12-06 13:08:33.786839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:46.903 [2024-12-06 13:08:33.786883] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:13:46.903 pt2
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:46.903 [2024-12-06 13:08:33.795903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:13:46.903 [2024-12-06 13:08:33.798392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:13:46.903 [2024-12-06 13:08:33.798668] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:13:46.903 [2024-12-06 13:08:33.798693] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:13:46.903 [2024-12-06 13:08:33.799027] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:13:46.903 [2024-12-06 13:08:33.799250] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:13:46.903 [2024-12-06 13:08:33.799282] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:13:46.903 [2024-12-06 13:08:33.799490] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:46.903 "name": "raid_bdev1",
00:13:46.903 "uuid": "27b37c7a-6b38-4e75-9f61-3925ed825f70",
00:13:46.903 "strip_size_kb": 0,
00:13:46.903 "state": "online",
00:13:46.903 "raid_level": "raid1",
00:13:46.903 "superblock": true,
00:13:46.903 "num_base_bdevs": 2,
00:13:46.903 "num_base_bdevs_discovered": 2,
00:13:46.903 "num_base_bdevs_operational": 2,
00:13:46.903 "base_bdevs_list": [
00:13:46.903 {
00:13:46.903 "name": "pt1",
00:13:46.903 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:46.903 "is_configured": true,
00:13:46.903 "data_offset": 2048,
00:13:46.903 "data_size": 63488
00:13:46.903 },
00:13:46.903 {
00:13:46.903 "name": "pt2",
00:13:46.903 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:46.903 "is_configured": true,
00:13:46.903 "data_offset": 2048,
00:13:46.903 "data_size": 63488
00:13:46.903 }
00:13:46.903 ]
00:13:46.903 }'
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:46.903 13:08:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:47.470 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:13:47.470 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:13:47.470 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:13:47.470 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:13:47.470 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:13:47.470 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:13:47.470 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:13:47.470 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:47.470 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:47.470 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:47.471 [2024-12-06 13:08:34.312448] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:47.471 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:47.471 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:13:47.471 "name": "raid_bdev1",
00:13:47.471 "aliases": [
00:13:47.471 "27b37c7a-6b38-4e75-9f61-3925ed825f70"
00:13:47.471 ],
00:13:47.471 "product_name": "Raid Volume",
00:13:47.471 "block_size": 512,
00:13:47.471 "num_blocks": 63488,
00:13:47.471 "uuid": "27b37c7a-6b38-4e75-9f61-3925ed825f70",
00:13:47.471 "assigned_rate_limits": {
00:13:47.471 "rw_ios_per_sec": 0,
00:13:47.471 "rw_mbytes_per_sec": 0,
00:13:47.471 "r_mbytes_per_sec": 0,
00:13:47.471 "w_mbytes_per_sec": 0
00:13:47.471 },
00:13:47.471 "claimed": false,
00:13:47.471 "zoned": false,
00:13:47.471 "supported_io_types": {
00:13:47.471 "read": true,
00:13:47.471 "write": true,
00:13:47.471 "unmap": false,
00:13:47.471 "flush": false,
00:13:47.471 "reset": true,
00:13:47.471 "nvme_admin": false,
00:13:47.471 "nvme_io": false,
00:13:47.471 "nvme_io_md": false,
00:13:47.471 "write_zeroes": true,
00:13:47.471 "zcopy": false,
00:13:47.471 "get_zone_info": false,
00:13:47.471 "zone_management": false,
00:13:47.471 "zone_append": false,
00:13:47.471 "compare": false,
00:13:47.471 "compare_and_write": false,
00:13:47.471 "abort": false,
00:13:47.471 "seek_hole": false,
00:13:47.471 "seek_data": false,
00:13:47.471 "copy": false,
00:13:47.471 "nvme_iov_md": false
00:13:47.471 },
00:13:47.471 "memory_domains": [
00:13:47.471 {
00:13:47.471 "dma_device_id": "system",
00:13:47.471 "dma_device_type": 1
00:13:47.471 },
00:13:47.471 {
00:13:47.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:47.471 "dma_device_type": 2
00:13:47.471 },
00:13:47.471 {
00:13:47.471 "dma_device_id": "system",
00:13:47.471 "dma_device_type": 1
00:13:47.471 },
00:13:47.471 {
00:13:47.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:47.471 "dma_device_type": 2
00:13:47.471 }
00:13:47.471 ],
00:13:47.471 "driver_specific": {
00:13:47.471 "raid": {
00:13:47.471 "uuid": "27b37c7a-6b38-4e75-9f61-3925ed825f70",
00:13:47.471 "strip_size_kb": 0,
00:13:47.471 "state": "online",
00:13:47.471 "raid_level": "raid1",
00:13:47.471 "superblock": true,
00:13:47.471 "num_base_bdevs": 2,
00:13:47.471 "num_base_bdevs_discovered": 2,
00:13:47.471 "num_base_bdevs_operational": 2,
00:13:47.471 "base_bdevs_list": [
00:13:47.471 {
00:13:47.471 "name": "pt1",
00:13:47.471 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:47.471 "is_configured": true,
00:13:47.471 "data_offset": 2048,
00:13:47.471 "data_size": 63488
00:13:47.471 },
00:13:47.471 {
00:13:47.471 "name": "pt2",
00:13:47.471 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:47.471 "is_configured": true,
00:13:47.471 "data_offset": 2048,
00:13:47.471 "data_size": 63488
00:13:47.471 }
00:13:47.471 ]
00:13:47.471 }
00:13:47.471 }
00:13:47.471 }'
00:13:47.471 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:13:47.471 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:13:47.471 pt2'
00:13:47.471 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:47.471 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:13:47.471 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:47.471 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:47.471 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:13:47.471 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:47.471 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:47.731 [2024-12-06 13:08:34.564554] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=27b37c7a-6b38-4e75-9f61-3925ed825f70
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 27b37c7a-6b38-4e75-9f61-3925ed825f70 ']'
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:47.731 [2024-12-06 13:08:34.612116] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:13:47.731 [2024-12-06 13:08:34.612151] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:47.731 [2024-12-06 13:08:34.612271] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:47.731 [2024-12-06 13:08:34.612372] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:47.731 [2024-12-06 13:08:34.612393] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:47.731 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:47.990 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:13:47.990 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:13:47.990 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:13:47.990 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:13:47.990 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:13:47.990 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:47.990 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:13:47.990 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:47.990 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:13:47.990 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:47.990 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:47.990 [2024-12-06 13:08:34.756311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:13:47.990 [2024-12-06 13:08:34.759230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:13:47.990 [2024-12-06 13:08:34.759335] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:13:47.990 [2024-12-06 13:08:34.759443] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:13:47.990 [2024-12-06 13:08:34.759505] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:13:47.990 [2024-12-06 13:08:34.759546] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:13:47.990 request:
00:13:47.990 {
00:13:47.990 "name": "raid_bdev1",
00:13:47.990 "raid_level": "raid1",
00:13:47.990 "base_bdevs": [
00:13:47.990 "malloc1",
00:13:47.990 "malloc2"
00:13:47.990 ],
00:13:47.990 "superblock": false,
00:13:47.990 "method": "bdev_raid_create",
00:13:47.990 "req_id": 1
00:13:47.990 }
00:13:47.990 Got JSON-RPC error response
00:13:47.990 response:
00:13:47.990 {
00:13:47.990 "code": -17,
00:13:47.990 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:13:47.990 }
00:13:47.990 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:13:47.990 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:13:47.990 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:47.990 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:47.990 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:47.990 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:47.990 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:47.990 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:47.990 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:13:47.990 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:47.990 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:13:47.990 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:13:47.990 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:13:47.990 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:47.990 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:47.990 [2024-12-06 13:08:34.820317] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:13:47.990 [2024-12-06 13:08:34.820431] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:47.990 [2024-12-06 13:08:34.820472] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:13:47.990 [2024-12-06 13:08:34.820538] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:47.990 [2024-12-06 13:08:34.824639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:47.990 [2024-12-06 13:08:34.824714] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:13:47.990 [2024-12-06 13:08:34.824850] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:13:47.990 [2024-12-06 13:08:34.824957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:13:47.990 pt1
00:13:47.990 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:47.990 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:13:47.990 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:47.990 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:47.990 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:47.990 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:47.990 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:47.991 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:47.991 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:47.991 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:47.991 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:47.991 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:47.991 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:47.991 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:47.991 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:47.991 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:47.991 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:47.991 "name": "raid_bdev1",
00:13:47.991 "uuid": "27b37c7a-6b38-4e75-9f61-3925ed825f70",
00:13:47.991 "strip_size_kb": 0,
00:13:47.991 "state": "configuring",
00:13:47.991 "raid_level": "raid1",
00:13:47.991 "superblock": true,
00:13:47.991 "num_base_bdevs": 2,
00:13:47.991 "num_base_bdevs_discovered": 1,
00:13:47.991 "num_base_bdevs_operational": 2,
00:13:47.991 "base_bdevs_list": [
00:13:47.991 {
00:13:47.991 "name": "pt1",
00:13:47.991 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:47.991 "is_configured": true,
00:13:47.991 "data_offset": 2048,
00:13:47.991 "data_size": 63488
00:13:47.991 },
00:13:47.991 {
00:13:47.991 "name": null,
00:13:47.991 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:47.991 "is_configured": false,
00:13:47.991 "data_offset": 2048,
00:13:47.991 "data_size": 63488
00:13:47.991 }
00:13:47.991 ]
00:13:47.991 }'
00:13:47.991 13:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:47.991 13:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:48.560 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:13:48.560 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:13:48.560 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:13:48.560 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:13:48.560 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:48.560 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:48.560 [2024-12-06 13:08:35.345223] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:13:48.560 [2024-12-06 13:08:35.345358] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:48.560 [2024-12-06 13:08:35.345406] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:13:48.560 [2024-12-06 13:08:35.345434] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:48.560 [2024-12-06 13:08:35.346303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:48.560 [2024-12-06 13:08:35.346353] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:13:48.560 [2024-12-06 13:08:35.346608] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:13:48.560 [2024-12-06 13:08:35.346665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:13:48.560 [2024-12-06 13:08:35.346835] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:13:48.560 [2024-12-06 13:08:35.346867] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:13:48.560 [2024-12-06 13:08:35.347221] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:13:48.560 [2024-12-06 13:08:35.347435] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:13:48.560 [2024-12-06 13:08:35.347461] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:13:48.560 [2024-12-06 13:08:35.347677] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:48.560 pt2
00:13:48.560 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:48.560 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:13:48.560 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:13:48.560 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:13:48.560 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:48.560 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:48.560 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:48.560 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:48.560 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:48.560 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:48.560 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:48.560 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:48.560 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:48.560 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:48.560 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:48.560 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:48.560 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:48.560 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:48.560 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:48.560 "name": "raid_bdev1",
00:13:48.560 "uuid": "27b37c7a-6b38-4e75-9f61-3925ed825f70",
00:13:48.560 "strip_size_kb": 0,
00:13:48.560 "state": "online",
00:13:48.560 "raid_level": "raid1",
00:13:48.560 "superblock": true,
00:13:48.560 "num_base_bdevs": 2,
00:13:48.560 "num_base_bdevs_discovered": 2,
00:13:48.560 "num_base_bdevs_operational": 2,
00:13:48.560 "base_bdevs_list": [
00:13:48.560 {
00:13:48.560 "name": "pt1",
00:13:48.560 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:48.560 "is_configured": true,
00:13:48.560 "data_offset": 2048,
00:13:48.560 "data_size": 63488
00:13:48.560 },
00:13:48.560 {
00:13:48.560 "name": "pt2",
00:13:48.560 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:48.560 "is_configured": true,
00:13:48.560 "data_offset": 2048,
00:13:48.560 "data_size": 63488
00:13:48.560 }
00:13:48.560 ]
00:13:48.560 }'
00:13:48.560 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:48.560 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:49.128 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:13:49.128 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:13:49.128 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:13:49.128 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:13:49.128 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:13:49.128 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:13:49.128 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:49.128 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:13:49.128 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:49.128 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:49.128 [2024-12-06 13:08:35.869764] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:49.128 13:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:49.128 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:13:49.128 "name": "raid_bdev1",
00:13:49.128 "aliases": [
00:13:49.128 "27b37c7a-6b38-4e75-9f61-3925ed825f70"
00:13:49.128 ],
00:13:49.128 "product_name": "Raid Volume",
00:13:49.128 "block_size": 512,
00:13:49.128 "num_blocks": 63488,
00:13:49.128 "uuid": "27b37c7a-6b38-4e75-9f61-3925ed825f70",
00:13:49.128 "assigned_rate_limits": {
00:13:49.128 "rw_ios_per_sec": 0,
00:13:49.128 "rw_mbytes_per_sec": 0,
00:13:49.128 "r_mbytes_per_sec": 0,
00:13:49.128 "w_mbytes_per_sec": 0
00:13:49.128 },
00:13:49.128 "claimed": false,
00:13:49.128 "zoned": false,
00:13:49.128 "supported_io_types": {
00:13:49.128 "read": true,
00:13:49.128 "write": true,
00:13:49.128 "unmap": false,
00:13:49.128 "flush": false,
00:13:49.128 "reset": true,
00:13:49.128 "nvme_admin": false,
00:13:49.128 "nvme_io": false,
00:13:49.128 "nvme_io_md": false,
00:13:49.128 "write_zeroes": true,
00:13:49.128 "zcopy": false,
00:13:49.128 "get_zone_info": false,
00:13:49.128 "zone_management": false,
00:13:49.128 "zone_append": false,
00:13:49.128 "compare": false,
00:13:49.128 "compare_and_write": false,
00:13:49.128 "abort": false,
00:13:49.128 "seek_hole": false,
00:13:49.128 "seek_data": false,
00:13:49.128 "copy": false,
00:13:49.128 "nvme_iov_md": false
00:13:49.128 },
00:13:49.128 "memory_domains": [
00:13:49.128 {
00:13:49.128 "dma_device_id": "system",
00:13:49.128 "dma_device_type": 1
00:13:49.128 },
00:13:49.128 {
00:13:49.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:49.128 "dma_device_type": 2
00:13:49.128 },
00:13:49.128 {
00:13:49.128 "dma_device_id": "system",
00:13:49.128 "dma_device_type": 1
00:13:49.128 },
00:13:49.128 {
00:13:49.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:49.128 "dma_device_type": 2
00:13:49.128 }
00:13:49.128 ],
00:13:49.128 "driver_specific": {
00:13:49.128 "raid": {
00:13:49.128 "uuid": "27b37c7a-6b38-4e75-9f61-3925ed825f70",
00:13:49.128 "strip_size_kb": 0,
00:13:49.128 "state": "online",
00:13:49.128 "raid_level": "raid1",
00:13:49.128 "superblock": true,
00:13:49.128 "num_base_bdevs": 2,
00:13:49.128 "num_base_bdevs_discovered": 2,
00:13:49.128 "num_base_bdevs_operational": 2,
00:13:49.128 "base_bdevs_list": [
00:13:49.128 {
00:13:49.128 "name": "pt1",
00:13:49.128 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:49.128 "is_configured": true,
00:13:49.128 "data_offset": 2048,
00:13:49.128 "data_size": 63488
00:13:49.128 },
00:13:49.128 {
00:13:49.128 "name": "pt2",
00:13:49.128 "uuid":
"00000000-0000-0000-0000-000000000002", 00:13:49.128 "is_configured": true, 00:13:49.128 "data_offset": 2048, 00:13:49.128 "data_size": 63488 00:13:49.128 } 00:13:49.128 ] 00:13:49.128 } 00:13:49.128 } 00:13:49.128 }' 00:13:49.128 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:49.128 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:49.128 pt2' 00:13:49.128 13:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.128 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:49.128 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:49.128 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.128 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:49.128 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.128 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.128 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.128 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:49.128 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:49.128 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:49.128 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:49.128 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:49.128 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.128 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.128 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.128 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:49.128 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:49.128 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:49.128 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.128 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.128 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:49.128 [2024-12-06 13:08:36.113725] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:49.128 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.387 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 27b37c7a-6b38-4e75-9f61-3925ed825f70 '!=' 27b37c7a-6b38-4e75-9f61-3925ed825f70 ']' 00:13:49.387 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:13:49.387 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:49.387 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:49.387 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:49.387 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.387 13:08:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:49.387 [2024-12-06 13:08:36.157496] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:49.387 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.387 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:49.387 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:49.387 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.387 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:49.387 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:49.387 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:49.387 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.387 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.387 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.387 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.387 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.387 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.387 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.387 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.388 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.388 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:13:49.388 "name": "raid_bdev1", 00:13:49.388 "uuid": "27b37c7a-6b38-4e75-9f61-3925ed825f70", 00:13:49.388 "strip_size_kb": 0, 00:13:49.388 "state": "online", 00:13:49.388 "raid_level": "raid1", 00:13:49.388 "superblock": true, 00:13:49.388 "num_base_bdevs": 2, 00:13:49.388 "num_base_bdevs_discovered": 1, 00:13:49.388 "num_base_bdevs_operational": 1, 00:13:49.388 "base_bdevs_list": [ 00:13:49.388 { 00:13:49.388 "name": null, 00:13:49.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.388 "is_configured": false, 00:13:49.388 "data_offset": 0, 00:13:49.388 "data_size": 63488 00:13:49.388 }, 00:13:49.388 { 00:13:49.388 "name": "pt2", 00:13:49.388 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:49.388 "is_configured": true, 00:13:49.388 "data_offset": 2048, 00:13:49.388 "data_size": 63488 00:13:49.388 } 00:13:49.388 ] 00:13:49.388 }' 00:13:49.388 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.388 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.955 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:49.955 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.955 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.955 [2024-12-06 13:08:36.669684] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:49.955 [2024-12-06 13:08:36.669731] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:49.955 [2024-12-06 13:08:36.669863] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:49.955 [2024-12-06 13:08:36.669940] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:49.955 [2024-12-06 13:08:36.669962] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:49.955 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.955 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.955 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:49.955 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.955 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.955 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.955 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:49.955 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:49.955 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:49.956 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:49.956 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:49.956 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.956 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.956 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.956 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:49.956 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:49.956 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:49.956 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:49.956 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:13:49.956 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:49.956 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.956 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.956 [2024-12-06 13:08:36.745746] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:49.956 [2024-12-06 13:08:36.745829] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:49.956 [2024-12-06 13:08:36.745856] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:49.956 [2024-12-06 13:08:36.745874] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.956 [2024-12-06 13:08:36.749185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.956 [2024-12-06 13:08:36.749266] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:49.956 [2024-12-06 13:08:36.749419] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:49.956 [2024-12-06 13:08:36.749488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:49.956 [2024-12-06 13:08:36.749657] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:49.956 [2024-12-06 13:08:36.749688] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:49.956 [2024-12-06 13:08:36.749990] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:49.956 [2024-12-06 13:08:36.750233] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:49.956 [2024-12-06 13:08:36.750256] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000008200 00:13:49.956 pt2 00:13:49.956 [2024-12-06 13:08:36.750529] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.956 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.956 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:49.956 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:49.956 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.956 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:49.956 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:49.956 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:49.956 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.956 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.956 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.956 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.956 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.956 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.956 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.956 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.956 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.956 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:13:49.956 "name": "raid_bdev1", 00:13:49.956 "uuid": "27b37c7a-6b38-4e75-9f61-3925ed825f70", 00:13:49.956 "strip_size_kb": 0, 00:13:49.956 "state": "online", 00:13:49.956 "raid_level": "raid1", 00:13:49.956 "superblock": true, 00:13:49.956 "num_base_bdevs": 2, 00:13:49.956 "num_base_bdevs_discovered": 1, 00:13:49.956 "num_base_bdevs_operational": 1, 00:13:49.956 "base_bdevs_list": [ 00:13:49.956 { 00:13:49.956 "name": null, 00:13:49.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.956 "is_configured": false, 00:13:49.956 "data_offset": 2048, 00:13:49.956 "data_size": 63488 00:13:49.956 }, 00:13:49.956 { 00:13:49.956 "name": "pt2", 00:13:49.956 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:49.956 "is_configured": true, 00:13:49.956 "data_offset": 2048, 00:13:49.956 "data_size": 63488 00:13:49.956 } 00:13:49.956 ] 00:13:49.956 }' 00:13:49.956 13:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.956 13:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.525 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:50.525 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.525 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.525 [2024-12-06 13:08:37.245961] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:50.525 [2024-12-06 13:08:37.246006] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:50.525 [2024-12-06 13:08:37.246117] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:50.525 [2024-12-06 13:08:37.246197] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:50.525 [2024-12-06 13:08:37.246213] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:50.525 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.525 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.525 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.525 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:50.525 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.525 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.525 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:50.525 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:50.525 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:13:50.525 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:50.525 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.525 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.526 [2024-12-06 13:08:37.309981] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:50.526 [2024-12-06 13:08:37.310080] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.526 [2024-12-06 13:08:37.310121] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:13:50.526 [2024-12-06 13:08:37.310138] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.526 [2024-12-06 13:08:37.313615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.526 [2024-12-06 13:08:37.313657] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:50.526 [2024-12-06 13:08:37.313793] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:50.526 [2024-12-06 13:08:37.313897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:50.526 [2024-12-06 13:08:37.314095] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:50.526 [2024-12-06 13:08:37.314122] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:50.526 [2024-12-06 13:08:37.314147] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:13:50.526 [2024-12-06 13:08:37.314227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:50.526 [2024-12-06 13:08:37.314385] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:13:50.526 [2024-12-06 13:08:37.314411] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:50.526 pt1 00:13:50.526 [2024-12-06 13:08:37.314832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:50.526 [2024-12-06 13:08:37.315063] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:13:50.526 [2024-12-06 13:08:37.315091] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:13:50.526 [2024-12-06 13:08:37.315343] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.526 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.526 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:13:50.526 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:13:50.526 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.526 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.526 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.526 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.526 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:50.526 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.526 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.526 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.526 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.526 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.526 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.526 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.526 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.526 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.526 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.526 "name": "raid_bdev1", 00:13:50.526 "uuid": "27b37c7a-6b38-4e75-9f61-3925ed825f70", 00:13:50.526 "strip_size_kb": 0, 00:13:50.526 "state": "online", 00:13:50.526 "raid_level": "raid1", 00:13:50.526 "superblock": true, 00:13:50.526 "num_base_bdevs": 2, 00:13:50.526 "num_base_bdevs_discovered": 1, 00:13:50.526 "num_base_bdevs_operational": 
1, 00:13:50.526 "base_bdevs_list": [ 00:13:50.526 { 00:13:50.526 "name": null, 00:13:50.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.526 "is_configured": false, 00:13:50.526 "data_offset": 2048, 00:13:50.526 "data_size": 63488 00:13:50.526 }, 00:13:50.526 { 00:13:50.526 "name": "pt2", 00:13:50.526 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:50.526 "is_configured": true, 00:13:50.526 "data_offset": 2048, 00:13:50.526 "data_size": 63488 00:13:50.526 } 00:13:50.526 ] 00:13:50.526 }' 00:13:50.526 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.526 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.095 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:51.095 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:51.095 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.095 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.095 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.095 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:51.095 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:51.095 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:51.095 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.095 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.095 [2024-12-06 13:08:37.890889] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:51.095 13:08:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.095 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 27b37c7a-6b38-4e75-9f61-3925ed825f70 '!=' 27b37c7a-6b38-4e75-9f61-3925ed825f70 ']' 00:13:51.095 13:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63415 00:13:51.095 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63415 ']' 00:13:51.095 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63415 00:13:51.095 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:51.095 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:51.095 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63415 00:13:51.095 killing process with pid 63415 00:13:51.095 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:51.095 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:51.095 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63415' 00:13:51.095 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63415 00:13:51.095 [2024-12-06 13:08:37.958182] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:51.095 13:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63415 00:13:51.095 [2024-12-06 13:08:37.958304] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:51.095 [2024-12-06 13:08:37.958406] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:51.095 [2024-12-06 13:08:37.958434] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state 
offline 00:13:51.354 [2024-12-06 13:08:38.175955] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:52.288 13:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:52.288 00:13:52.288 real 0m6.738s 00:13:52.288 user 0m10.594s 00:13:52.288 sys 0m0.990s 00:13:52.288 13:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:52.288 ************************************ 00:13:52.288 END TEST raid_superblock_test 00:13:52.288 ************************************ 00:13:52.288 13:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.546 13:08:39 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:13:52.546 13:08:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:52.546 13:08:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:52.546 13:08:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:52.546 ************************************ 00:13:52.546 START TEST raid_read_error_test 00:13:52.546 ************************************ 00:13:52.546 13:08:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:13:52.546 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:52.546 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:13:52.546 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:52.546 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:52.546 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:52.546 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:52.546 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:13:52.546 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:52.546 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:52.546 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:52.546 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:52.546 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:52.546 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:52.546 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:52.546 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:52.546 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:52.546 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:52.546 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:52.546 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:52.546 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:52.546 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:52.546 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.q8yaStk9Hv 00:13:52.546 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63756 00:13:52.546 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63756 00:13:52.546 13:08:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:52.546 
13:08:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63756 ']' 00:13:52.546 13:08:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.546 13:08:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:52.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.546 13:08:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.546 13:08:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:52.546 13:08:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.546 [2024-12-06 13:08:39.447288] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:13:52.547 [2024-12-06 13:08:39.447544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63756 ] 00:13:52.805 [2024-12-06 13:08:39.642850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.805 [2024-12-06 13:08:39.818533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.063 [2024-12-06 13:08:40.049608] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:53.064 [2024-12-06 13:08:40.049700] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:53.630 13:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:53.630 13:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:53.630 13:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:13:53.630 13:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:53.630 13:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.630 13:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.630 BaseBdev1_malloc 00:13:53.630 13:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.630 13:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:53.630 13:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.630 13:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.630 true 00:13:53.630 13:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.630 13:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:53.630 13:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.630 13:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.630 [2024-12-06 13:08:40.486292] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:53.630 [2024-12-06 13:08:40.486383] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.630 [2024-12-06 13:08:40.486411] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:53.630 [2024-12-06 13:08:40.486428] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.630 [2024-12-06 13:08:40.489469] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.630 [2024-12-06 13:08:40.489548] vbdev_passthru.c: 711:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:13:53.630 BaseBdev1 00:13:53.630 13:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.630 13:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:53.630 13:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:53.630 13:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.630 13:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.630 BaseBdev2_malloc 00:13:53.630 13:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.630 13:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:53.630 13:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.630 13:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.630 true 00:13:53.630 13:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.630 13:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:53.630 13:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.630 13:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.630 [2024-12-06 13:08:40.548769] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:53.630 [2024-12-06 13:08:40.549059] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.630 [2024-12-06 13:08:40.549128] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:53.630 [2024-12-06 13:08:40.549249] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.631 [2024-12-06 13:08:40.552451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.631 [2024-12-06 13:08:40.552695] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:53.631 BaseBdev2 00:13:53.631 13:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.631 13:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:13:53.631 13:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.631 13:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.631 [2024-12-06 13:08:40.561004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:53.631 [2024-12-06 13:08:40.563618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:53.631 [2024-12-06 13:08:40.563879] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:53.631 [2024-12-06 13:08:40.563901] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:53.631 [2024-12-06 13:08:40.564163] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:53.631 [2024-12-06 13:08:40.564379] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:53.631 [2024-12-06 13:08:40.564394] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:53.631 [2024-12-06 13:08:40.564596] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.631 13:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.631 13:08:40 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:53.631 13:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:53.631 13:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.631 13:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:53.631 13:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:53.631 13:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:53.631 13:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.631 13:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.631 13:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.631 13:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.631 13:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.631 13:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.631 13:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.631 13:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.631 13:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.631 13:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.631 "name": "raid_bdev1", 00:13:53.631 "uuid": "d353e97d-7bbf-419b-8598-4b0f3190b084", 00:13:53.631 "strip_size_kb": 0, 00:13:53.631 "state": "online", 00:13:53.631 "raid_level": "raid1", 00:13:53.631 "superblock": true, 00:13:53.631 "num_base_bdevs": 2, 00:13:53.631 
"num_base_bdevs_discovered": 2, 00:13:53.631 "num_base_bdevs_operational": 2, 00:13:53.631 "base_bdevs_list": [ 00:13:53.631 { 00:13:53.631 "name": "BaseBdev1", 00:13:53.631 "uuid": "3e9856f4-5165-57a1-830e-74ea8141c79c", 00:13:53.631 "is_configured": true, 00:13:53.631 "data_offset": 2048, 00:13:53.631 "data_size": 63488 00:13:53.631 }, 00:13:53.631 { 00:13:53.631 "name": "BaseBdev2", 00:13:53.631 "uuid": "0a509ad0-6203-5c8c-b8ab-406fd6368ffd", 00:13:53.631 "is_configured": true, 00:13:53.631 "data_offset": 2048, 00:13:53.631 "data_size": 63488 00:13:53.631 } 00:13:53.631 ] 00:13:53.631 }' 00:13:53.631 13:08:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.631 13:08:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.196 13:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:54.196 13:08:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:54.196 [2024-12-06 13:08:41.206962] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:55.130 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:55.130 13:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.130 13:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.130 13:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.130 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:55.130 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:55.130 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:13:55.130 13:08:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:13:55.131 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:55.131 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:55.131 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:55.131 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:55.131 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:55.131 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:55.131 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.131 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.131 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.131 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.131 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.131 13:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.131 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.131 13:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.131 13:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.389 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.389 "name": "raid_bdev1", 00:13:55.389 "uuid": "d353e97d-7bbf-419b-8598-4b0f3190b084", 00:13:55.389 "strip_size_kb": 0, 00:13:55.389 "state": "online", 
00:13:55.389 "raid_level": "raid1", 00:13:55.389 "superblock": true, 00:13:55.389 "num_base_bdevs": 2, 00:13:55.389 "num_base_bdevs_discovered": 2, 00:13:55.389 "num_base_bdevs_operational": 2, 00:13:55.389 "base_bdevs_list": [ 00:13:55.389 { 00:13:55.389 "name": "BaseBdev1", 00:13:55.389 "uuid": "3e9856f4-5165-57a1-830e-74ea8141c79c", 00:13:55.389 "is_configured": true, 00:13:55.389 "data_offset": 2048, 00:13:55.389 "data_size": 63488 00:13:55.389 }, 00:13:55.389 { 00:13:55.389 "name": "BaseBdev2", 00:13:55.389 "uuid": "0a509ad0-6203-5c8c-b8ab-406fd6368ffd", 00:13:55.389 "is_configured": true, 00:13:55.389 "data_offset": 2048, 00:13:55.389 "data_size": 63488 00:13:55.389 } 00:13:55.389 ] 00:13:55.389 }' 00:13:55.389 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.389 13:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.648 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:55.648 13:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.648 13:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.648 [2024-12-06 13:08:42.625402] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:55.648 [2024-12-06 13:08:42.625447] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:55.648 { 00:13:55.648 "results": [ 00:13:55.648 { 00:13:55.648 "job": "raid_bdev1", 00:13:55.648 "core_mask": "0x1", 00:13:55.648 "workload": "randrw", 00:13:55.648 "percentage": 50, 00:13:55.648 "status": "finished", 00:13:55.648 "queue_depth": 1, 00:13:55.648 "io_size": 131072, 00:13:55.648 "runtime": 1.416011, 00:13:55.648 "iops": 10909.519770679748, 00:13:55.648 "mibps": 1363.6899713349685, 00:13:55.648 "io_failed": 0, 00:13:55.648 "io_timeout": 0, 00:13:55.648 "avg_latency_us": 87.41966574078434, 
00:13:55.648 "min_latency_us": 39.79636363636364, 00:13:55.648 "max_latency_us": 1824.581818181818 00:13:55.648 } 00:13:55.648 ], 00:13:55.648 "core_count": 1 00:13:55.648 } 00:13:55.648 [2024-12-06 13:08:42.629389] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:55.648 [2024-12-06 13:08:42.629541] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.648 [2024-12-06 13:08:42.629667] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:55.648 [2024-12-06 13:08:42.629690] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:55.648 13:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.648 13:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63756 00:13:55.648 13:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63756 ']' 00:13:55.648 13:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63756 00:13:55.648 13:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:13:55.648 13:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:55.648 13:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63756 00:13:55.906 13:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:55.906 killing process with pid 63756 00:13:55.906 13:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:55.906 13:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63756' 00:13:55.906 13:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63756 00:13:55.906 13:08:42 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63756 00:13:55.906 [2024-12-06 13:08:42.674006] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:55.906 [2024-12-06 13:08:42.797073] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:57.277 13:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:57.277 13:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.q8yaStk9Hv 00:13:57.277 13:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:57.277 13:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:57.277 13:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:57.277 13:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:57.277 13:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:57.277 13:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:57.277 00:13:57.277 real 0m4.661s 00:13:57.277 user 0m5.766s 00:13:57.277 sys 0m0.641s 00:13:57.277 13:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:57.277 ************************************ 00:13:57.277 END TEST raid_read_error_test 00:13:57.277 ************************************ 00:13:57.277 13:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.277 13:08:44 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:13:57.277 13:08:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:57.277 13:08:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:57.277 13:08:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:57.277 ************************************ 00:13:57.277 START TEST 
raid_write_error_test 00:13:57.277 ************************************ 00:13:57.277 13:08:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:13:57.277 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:57.277 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:13:57.277 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:57.277 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:57.277 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:57.277 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:57.277 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:57.277 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:57.277 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:57.277 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:57.277 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:57.277 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:57.277 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:57.277 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:57.277 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:57.277 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:57.277 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:57.277 13:08:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:57.277 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:57.277 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:57.277 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:57.277 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.jkwlgZ7NZN 00:13:57.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.277 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63902 00:13:57.277 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63902 00:13:57.277 13:08:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:57.277 13:08:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63902 ']' 00:13:57.277 13:08:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.277 13:08:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:57.277 13:08:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.277 13:08:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:57.277 13:08:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.277 [2024-12-06 13:08:44.148315] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:13:57.277 [2024-12-06 13:08:44.148502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63902 ] 00:13:57.535 [2024-12-06 13:08:44.324799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.535 [2024-12-06 13:08:44.475117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.793 [2024-12-06 13:08:44.701271] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:57.793 [2024-12-06 13:08:44.701628] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:58.361 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:58.361 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:58.361 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:58.361 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:58.361 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.361 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.361 BaseBdev1_malloc 00:13:58.361 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.361 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:58.361 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.361 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.361 true 00:13:58.361 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:58.361 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:58.361 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.361 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.361 [2024-12-06 13:08:45.250828] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:58.361 [2024-12-06 13:08:45.250968] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:58.361 [2024-12-06 13:08:45.250998] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:58.361 [2024-12-06 13:08:45.251015] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:58.361 [2024-12-06 13:08:45.254250] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:58.361 [2024-12-06 13:08:45.254311] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:58.361 BaseBdev1 00:13:58.361 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.361 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:58.361 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:58.361 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.361 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.361 BaseBdev2_malloc 00:13:58.361 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.361 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:58.361 13:08:45 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.361 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.361 true 00:13:58.361 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.361 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:58.361 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.361 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.361 [2024-12-06 13:08:45.318648] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:58.361 [2024-12-06 13:08:45.318740] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:58.361 [2024-12-06 13:08:45.318769] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:58.361 [2024-12-06 13:08:45.318787] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:58.361 [2024-12-06 13:08:45.322051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:58.361 [2024-12-06 13:08:45.322100] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:58.361 BaseBdev2 00:13:58.361 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.361 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:13:58.361 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.361 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.361 [2024-12-06 13:08:45.330870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:13:58.361 [2024-12-06 13:08:45.333660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:58.361 [2024-12-06 13:08:45.334010] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:58.361 [2024-12-06 13:08:45.334036] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:58.361 [2024-12-06 13:08:45.334429] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:58.361 [2024-12-06 13:08:45.334755] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:58.361 [2024-12-06 13:08:45.334774] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:58.361 [2024-12-06 13:08:45.335024] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.361 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.361 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:58.362 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:58.362 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.362 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:58.362 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:58.362 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:58.362 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.362 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.362 13:08:45 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.362 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.362 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.362 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.362 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.362 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.362 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.620 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.621 "name": "raid_bdev1", 00:13:58.621 "uuid": "dfea4b6f-f30e-48c5-9e1e-df223a199091", 00:13:58.621 "strip_size_kb": 0, 00:13:58.621 "state": "online", 00:13:58.621 "raid_level": "raid1", 00:13:58.621 "superblock": true, 00:13:58.621 "num_base_bdevs": 2, 00:13:58.621 "num_base_bdevs_discovered": 2, 00:13:58.621 "num_base_bdevs_operational": 2, 00:13:58.621 "base_bdevs_list": [ 00:13:58.621 { 00:13:58.621 "name": "BaseBdev1", 00:13:58.621 "uuid": "fea26bf6-f65b-5296-b3cb-9d29cad0d222", 00:13:58.621 "is_configured": true, 00:13:58.621 "data_offset": 2048, 00:13:58.621 "data_size": 63488 00:13:58.621 }, 00:13:58.621 { 00:13:58.621 "name": "BaseBdev2", 00:13:58.621 "uuid": "693a6188-5a86-5a2f-82f7-812ecfeec720", 00:13:58.621 "is_configured": true, 00:13:58.621 "data_offset": 2048, 00:13:58.621 "data_size": 63488 00:13:58.621 } 00:13:58.621 ] 00:13:58.621 }' 00:13:58.621 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.621 13:08:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.879 13:08:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:58.879 13:08:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:59.138 [2024-12-06 13:08:45.972770] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:00.070 13:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:00.070 13:08:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.070 13:08:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.070 [2024-12-06 13:08:46.872920] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:14:00.070 [2024-12-06 13:08:46.873035] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:00.070 [2024-12-06 13:08:46.873306] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:14:00.070 13:08:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.071 13:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:00.071 13:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:00.071 13:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:14:00.071 13:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:14:00.071 13:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:00.071 13:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.071 13:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.071 13:08:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:00.071 13:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:00.071 13:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:00.071 13:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.071 13:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.071 13:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.071 13:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.071 13:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.071 13:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.071 13:08:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.071 13:08:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.071 13:08:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.071 13:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.071 "name": "raid_bdev1", 00:14:00.071 "uuid": "dfea4b6f-f30e-48c5-9e1e-df223a199091", 00:14:00.071 "strip_size_kb": 0, 00:14:00.071 "state": "online", 00:14:00.071 "raid_level": "raid1", 00:14:00.071 "superblock": true, 00:14:00.071 "num_base_bdevs": 2, 00:14:00.071 "num_base_bdevs_discovered": 1, 00:14:00.071 "num_base_bdevs_operational": 1, 00:14:00.071 "base_bdevs_list": [ 00:14:00.071 { 00:14:00.071 "name": null, 00:14:00.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.071 "is_configured": false, 00:14:00.071 "data_offset": 0, 00:14:00.071 "data_size": 63488 00:14:00.071 }, 00:14:00.071 { 00:14:00.071 "name": 
"BaseBdev2", 00:14:00.071 "uuid": "693a6188-5a86-5a2f-82f7-812ecfeec720", 00:14:00.071 "is_configured": true, 00:14:00.071 "data_offset": 2048, 00:14:00.071 "data_size": 63488 00:14:00.071 } 00:14:00.071 ] 00:14:00.071 }' 00:14:00.071 13:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.071 13:08:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.638 13:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:00.638 13:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.638 13:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.638 [2024-12-06 13:08:47.409561] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:00.638 [2024-12-06 13:08:47.409624] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:00.638 [2024-12-06 13:08:47.413149] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:00.638 [2024-12-06 13:08:47.413342] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.638 [2024-12-06 13:08:47.413572] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:00.638 [2024-12-06 13:08:47.413798] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:00.638 { 00:14:00.638 "results": [ 00:14:00.638 { 00:14:00.638 "job": "raid_bdev1", 00:14:00.638 "core_mask": "0x1", 00:14:00.638 "workload": "randrw", 00:14:00.638 "percentage": 50, 00:14:00.638 "status": "finished", 00:14:00.638 "queue_depth": 1, 00:14:00.638 "io_size": 131072, 00:14:00.638 "runtime": 1.433969, 00:14:00.638 "iops": 12313.376370060998, 00:14:00.638 "mibps": 1539.1720462576247, 00:14:00.638 "io_failed": 0, 00:14:00.638 "io_timeout": 0, 
00:14:00.638 "avg_latency_us": 76.40349899859443, 00:14:00.638 "min_latency_us": 40.261818181818185, 00:14:00.638 "max_latency_us": 2263.970909090909 00:14:00.638 } 00:14:00.638 ], 00:14:00.638 "core_count": 1 00:14:00.638 } 00:14:00.638 13:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.638 13:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63902 00:14:00.638 13:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63902 ']' 00:14:00.638 13:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63902 00:14:00.638 13:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:14:00.638 13:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:00.638 13:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63902 00:14:00.638 killing process with pid 63902 00:14:00.638 13:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:00.638 13:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:00.638 13:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63902' 00:14:00.638 13:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63902 00:14:00.638 [2024-12-06 13:08:47.456682] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:00.638 13:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63902 00:14:00.638 [2024-12-06 13:08:47.576524] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:02.015 13:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.jkwlgZ7NZN 00:14:02.015 13:08:48 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:02.015 13:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:02.015 13:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:14:02.015 13:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:14:02.015 ************************************ 00:14:02.015 END TEST raid_write_error_test 00:14:02.015 ************************************ 00:14:02.015 13:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:02.015 13:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:02.015 13:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:02.015 00:14:02.015 real 0m4.646s 00:14:02.015 user 0m5.755s 00:14:02.015 sys 0m0.661s 00:14:02.015 13:08:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:02.015 13:08:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.015 13:08:48 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:14:02.015 13:08:48 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:02.015 13:08:48 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:14:02.015 13:08:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:02.015 13:08:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:02.015 13:08:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:02.015 ************************************ 00:14:02.015 START TEST raid_state_function_test 00:14:02.015 ************************************ 00:14:02.015 13:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:14:02.015 13:08:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:14:02.015 13:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:02.015 13:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:02.015 13:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:02.015 13:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:02.015 13:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:02.015 13:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:02.015 13:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:02.015 13:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:02.015 13:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:02.015 13:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:02.015 13:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:02.015 13:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:02.015 13:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:02.015 13:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:02.015 13:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:02.015 13:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:02.015 13:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:02.015 13:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:02.015 
13:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:02.015 13:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:02.015 13:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:14:02.015 13:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:02.015 13:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:02.015 13:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:02.015 13:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:02.015 Process raid pid: 64040 00:14:02.015 13:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=64040 00:14:02.015 13:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64040' 00:14:02.015 13:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:02.015 13:08:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 64040 00:14:02.015 13:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 64040 ']' 00:14:02.015 13:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.015 13:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:02.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.015 13:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:02.015 13:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:02.015 13:08:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.015 [2024-12-06 13:08:48.846428] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:14:02.015 [2024-12-06 13:08:48.846611] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.015 [2024-12-06 13:08:49.022953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.274 [2024-12-06 13:08:49.161430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.533 [2024-12-06 13:08:49.369455] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:02.533 [2024-12-06 13:08:49.369522] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:03.100 13:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:03.100 13:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:03.100 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:03.100 13:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.100 13:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.100 [2024-12-06 13:08:49.869207] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:03.100 [2024-12-06 13:08:49.869285] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:03.100 [2024-12-06 13:08:49.869303] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:03.100 [2024-12-06 13:08:49.869320] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:03.100 [2024-12-06 13:08:49.869331] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:03.100 [2024-12-06 13:08:49.869345] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:03.100 13:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.100 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:03.100 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:03.100 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:03.100 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:03.100 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:03.100 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:03.100 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.100 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.100 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.100 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.100 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.100 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:14:03.100 13:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.100 13:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.100 13:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.100 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.100 "name": "Existed_Raid", 00:14:03.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.100 "strip_size_kb": 64, 00:14:03.100 "state": "configuring", 00:14:03.100 "raid_level": "raid0", 00:14:03.100 "superblock": false, 00:14:03.100 "num_base_bdevs": 3, 00:14:03.100 "num_base_bdevs_discovered": 0, 00:14:03.100 "num_base_bdevs_operational": 3, 00:14:03.100 "base_bdevs_list": [ 00:14:03.100 { 00:14:03.100 "name": "BaseBdev1", 00:14:03.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.100 "is_configured": false, 00:14:03.100 "data_offset": 0, 00:14:03.100 "data_size": 0 00:14:03.100 }, 00:14:03.100 { 00:14:03.100 "name": "BaseBdev2", 00:14:03.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.100 "is_configured": false, 00:14:03.100 "data_offset": 0, 00:14:03.100 "data_size": 0 00:14:03.100 }, 00:14:03.100 { 00:14:03.100 "name": "BaseBdev3", 00:14:03.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.100 "is_configured": false, 00:14:03.100 "data_offset": 0, 00:14:03.100 "data_size": 0 00:14:03.100 } 00:14:03.100 ] 00:14:03.100 }' 00:14:03.100 13:08:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.100 13:08:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.669 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:03.669 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.669 13:08:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.669 [2024-12-06 13:08:50.393283] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:03.669 [2024-12-06 13:08:50.393330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:03.669 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.669 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:03.669 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.669 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.669 [2024-12-06 13:08:50.401269] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:03.669 [2024-12-06 13:08:50.401327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:03.669 [2024-12-06 13:08:50.401354] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:03.669 [2024-12-06 13:08:50.401371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:03.669 [2024-12-06 13:08:50.401380] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:03.669 [2024-12-06 13:08:50.401395] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:03.669 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.670 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:03.670 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:03.670 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.670 [2024-12-06 13:08:50.446339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:03.670 BaseBdev1 00:14:03.670 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.670 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:03.670 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:03.670 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:03.670 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:03.670 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:03.670 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:03.670 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:03.670 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.670 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.670 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.670 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:03.670 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.670 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.670 [ 00:14:03.670 { 00:14:03.670 "name": "BaseBdev1", 00:14:03.670 "aliases": [ 00:14:03.670 "fa8f3b9f-0a99-4512-8493-296f66bbbc77" 00:14:03.670 ], 00:14:03.670 
"product_name": "Malloc disk", 00:14:03.670 "block_size": 512, 00:14:03.670 "num_blocks": 65536, 00:14:03.670 "uuid": "fa8f3b9f-0a99-4512-8493-296f66bbbc77", 00:14:03.670 "assigned_rate_limits": { 00:14:03.670 "rw_ios_per_sec": 0, 00:14:03.670 "rw_mbytes_per_sec": 0, 00:14:03.670 "r_mbytes_per_sec": 0, 00:14:03.670 "w_mbytes_per_sec": 0 00:14:03.670 }, 00:14:03.670 "claimed": true, 00:14:03.670 "claim_type": "exclusive_write", 00:14:03.670 "zoned": false, 00:14:03.670 "supported_io_types": { 00:14:03.670 "read": true, 00:14:03.670 "write": true, 00:14:03.670 "unmap": true, 00:14:03.670 "flush": true, 00:14:03.670 "reset": true, 00:14:03.670 "nvme_admin": false, 00:14:03.670 "nvme_io": false, 00:14:03.670 "nvme_io_md": false, 00:14:03.670 "write_zeroes": true, 00:14:03.670 "zcopy": true, 00:14:03.670 "get_zone_info": false, 00:14:03.670 "zone_management": false, 00:14:03.670 "zone_append": false, 00:14:03.670 "compare": false, 00:14:03.670 "compare_and_write": false, 00:14:03.670 "abort": true, 00:14:03.670 "seek_hole": false, 00:14:03.670 "seek_data": false, 00:14:03.670 "copy": true, 00:14:03.670 "nvme_iov_md": false 00:14:03.670 }, 00:14:03.670 "memory_domains": [ 00:14:03.670 { 00:14:03.670 "dma_device_id": "system", 00:14:03.670 "dma_device_type": 1 00:14:03.670 }, 00:14:03.670 { 00:14:03.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.670 "dma_device_type": 2 00:14:03.670 } 00:14:03.670 ], 00:14:03.670 "driver_specific": {} 00:14:03.670 } 00:14:03.670 ] 00:14:03.670 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.670 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:03.670 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:03.670 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:03.670 13:08:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:03.670 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:03.670 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:03.670 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:03.670 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.670 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.670 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.670 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.670 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.670 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.670 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.670 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.670 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.670 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.670 "name": "Existed_Raid", 00:14:03.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.670 "strip_size_kb": 64, 00:14:03.670 "state": "configuring", 00:14:03.670 "raid_level": "raid0", 00:14:03.670 "superblock": false, 00:14:03.670 "num_base_bdevs": 3, 00:14:03.670 "num_base_bdevs_discovered": 1, 00:14:03.670 "num_base_bdevs_operational": 3, 00:14:03.670 "base_bdevs_list": [ 00:14:03.670 { 00:14:03.670 "name": "BaseBdev1", 
00:14:03.670 "uuid": "fa8f3b9f-0a99-4512-8493-296f66bbbc77", 00:14:03.670 "is_configured": true, 00:14:03.670 "data_offset": 0, 00:14:03.670 "data_size": 65536 00:14:03.670 }, 00:14:03.670 { 00:14:03.670 "name": "BaseBdev2", 00:14:03.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.670 "is_configured": false, 00:14:03.670 "data_offset": 0, 00:14:03.670 "data_size": 0 00:14:03.670 }, 00:14:03.670 { 00:14:03.670 "name": "BaseBdev3", 00:14:03.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.670 "is_configured": false, 00:14:03.670 "data_offset": 0, 00:14:03.670 "data_size": 0 00:14:03.670 } 00:14:03.670 ] 00:14:03.670 }' 00:14:03.670 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.670 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.237 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:04.237 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.237 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.237 [2024-12-06 13:08:50.966560] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:04.237 [2024-12-06 13:08:50.966769] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:04.237 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.237 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:04.237 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.237 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.237 [2024-12-06 
13:08:50.974615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:04.237 [2024-12-06 13:08:50.977032] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:04.237 [2024-12-06 13:08:50.977085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:04.237 [2024-12-06 13:08:50.977102] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:04.237 [2024-12-06 13:08:50.977118] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:04.237 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.237 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:04.237 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:04.237 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:04.237 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:04.237 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:04.237 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:04.237 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.237 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:04.237 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.237 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.237 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:04.237 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.237 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.237 13:08:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.237 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.237 13:08:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.237 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.237 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.237 "name": "Existed_Raid", 00:14:04.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.237 "strip_size_kb": 64, 00:14:04.237 "state": "configuring", 00:14:04.237 "raid_level": "raid0", 00:14:04.237 "superblock": false, 00:14:04.237 "num_base_bdevs": 3, 00:14:04.237 "num_base_bdevs_discovered": 1, 00:14:04.237 "num_base_bdevs_operational": 3, 00:14:04.237 "base_bdevs_list": [ 00:14:04.237 { 00:14:04.237 "name": "BaseBdev1", 00:14:04.237 "uuid": "fa8f3b9f-0a99-4512-8493-296f66bbbc77", 00:14:04.237 "is_configured": true, 00:14:04.237 "data_offset": 0, 00:14:04.237 "data_size": 65536 00:14:04.237 }, 00:14:04.237 { 00:14:04.237 "name": "BaseBdev2", 00:14:04.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.237 "is_configured": false, 00:14:04.237 "data_offset": 0, 00:14:04.237 "data_size": 0 00:14:04.237 }, 00:14:04.237 { 00:14:04.237 "name": "BaseBdev3", 00:14:04.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.237 "is_configured": false, 00:14:04.237 "data_offset": 0, 00:14:04.237 "data_size": 0 00:14:04.237 } 00:14:04.237 ] 00:14:04.237 }' 00:14:04.237 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:14:04.237 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.495 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:04.495 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.495 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.753 [2024-12-06 13:08:51.532866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:04.753 BaseBdev2 00:14:04.753 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.753 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:04.753 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:04.753 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:04.753 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:04.753 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:04.753 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:04.753 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:04.753 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.753 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.754 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.754 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:04.754 13:08:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.754 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.754 [ 00:14:04.754 { 00:14:04.754 "name": "BaseBdev2", 00:14:04.754 "aliases": [ 00:14:04.754 "300b17ea-4610-4c2f-a74c-279a1a99e495" 00:14:04.754 ], 00:14:04.754 "product_name": "Malloc disk", 00:14:04.754 "block_size": 512, 00:14:04.754 "num_blocks": 65536, 00:14:04.754 "uuid": "300b17ea-4610-4c2f-a74c-279a1a99e495", 00:14:04.754 "assigned_rate_limits": { 00:14:04.754 "rw_ios_per_sec": 0, 00:14:04.754 "rw_mbytes_per_sec": 0, 00:14:04.754 "r_mbytes_per_sec": 0, 00:14:04.754 "w_mbytes_per_sec": 0 00:14:04.754 }, 00:14:04.754 "claimed": true, 00:14:04.754 "claim_type": "exclusive_write", 00:14:04.754 "zoned": false, 00:14:04.754 "supported_io_types": { 00:14:04.754 "read": true, 00:14:04.754 "write": true, 00:14:04.754 "unmap": true, 00:14:04.754 "flush": true, 00:14:04.754 "reset": true, 00:14:04.754 "nvme_admin": false, 00:14:04.754 "nvme_io": false, 00:14:04.754 "nvme_io_md": false, 00:14:04.754 "write_zeroes": true, 00:14:04.754 "zcopy": true, 00:14:04.754 "get_zone_info": false, 00:14:04.754 "zone_management": false, 00:14:04.754 "zone_append": false, 00:14:04.754 "compare": false, 00:14:04.754 "compare_and_write": false, 00:14:04.754 "abort": true, 00:14:04.754 "seek_hole": false, 00:14:04.754 "seek_data": false, 00:14:04.754 "copy": true, 00:14:04.754 "nvme_iov_md": false 00:14:04.754 }, 00:14:04.754 "memory_domains": [ 00:14:04.754 { 00:14:04.754 "dma_device_id": "system", 00:14:04.754 "dma_device_type": 1 00:14:04.754 }, 00:14:04.754 { 00:14:04.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.754 "dma_device_type": 2 00:14:04.754 } 00:14:04.754 ], 00:14:04.754 "driver_specific": {} 00:14:04.754 } 00:14:04.754 ] 00:14:04.754 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.754 13:08:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:04.754 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:04.754 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:04.754 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:04.754 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:04.754 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:04.754 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:04.754 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.754 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:04.754 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.754 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.754 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.754 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.754 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.754 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.754 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.754 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.754 13:08:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.754 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.754 "name": "Existed_Raid", 00:14:04.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.754 "strip_size_kb": 64, 00:14:04.754 "state": "configuring", 00:14:04.754 "raid_level": "raid0", 00:14:04.754 "superblock": false, 00:14:04.754 "num_base_bdevs": 3, 00:14:04.754 "num_base_bdevs_discovered": 2, 00:14:04.754 "num_base_bdevs_operational": 3, 00:14:04.754 "base_bdevs_list": [ 00:14:04.754 { 00:14:04.754 "name": "BaseBdev1", 00:14:04.754 "uuid": "fa8f3b9f-0a99-4512-8493-296f66bbbc77", 00:14:04.754 "is_configured": true, 00:14:04.754 "data_offset": 0, 00:14:04.754 "data_size": 65536 00:14:04.754 }, 00:14:04.754 { 00:14:04.754 "name": "BaseBdev2", 00:14:04.754 "uuid": "300b17ea-4610-4c2f-a74c-279a1a99e495", 00:14:04.754 "is_configured": true, 00:14:04.754 "data_offset": 0, 00:14:04.754 "data_size": 65536 00:14:04.754 }, 00:14:04.754 { 00:14:04.754 "name": "BaseBdev3", 00:14:04.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.754 "is_configured": false, 00:14:04.754 "data_offset": 0, 00:14:04.754 "data_size": 0 00:14:04.754 } 00:14:04.754 ] 00:14:04.754 }' 00:14:04.754 13:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.754 13:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.321 [2024-12-06 13:08:52.136563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:05.321 [2024-12-06 13:08:52.136799] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:05.321 [2024-12-06 13:08:52.136845] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:05.321 [2024-12-06 13:08:52.137218] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:05.321 [2024-12-06 13:08:52.137453] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:05.321 [2024-12-06 13:08:52.137494] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:05.321 [2024-12-06 13:08:52.137831] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.321 BaseBdev3 00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.321 
13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.321 [ 00:14:05.321 { 00:14:05.321 "name": "BaseBdev3", 00:14:05.321 "aliases": [ 00:14:05.321 "c0621501-5457-45d6-8138-2d92fc4b067b" 00:14:05.321 ], 00:14:05.321 "product_name": "Malloc disk", 00:14:05.321 "block_size": 512, 00:14:05.321 "num_blocks": 65536, 00:14:05.321 "uuid": "c0621501-5457-45d6-8138-2d92fc4b067b", 00:14:05.321 "assigned_rate_limits": { 00:14:05.321 "rw_ios_per_sec": 0, 00:14:05.321 "rw_mbytes_per_sec": 0, 00:14:05.321 "r_mbytes_per_sec": 0, 00:14:05.321 "w_mbytes_per_sec": 0 00:14:05.321 }, 00:14:05.321 "claimed": true, 00:14:05.321 "claim_type": "exclusive_write", 00:14:05.321 "zoned": false, 00:14:05.321 "supported_io_types": { 00:14:05.321 "read": true, 00:14:05.321 "write": true, 00:14:05.321 "unmap": true, 00:14:05.321 "flush": true, 00:14:05.321 "reset": true, 00:14:05.321 "nvme_admin": false, 00:14:05.321 "nvme_io": false, 00:14:05.321 "nvme_io_md": false, 00:14:05.321 "write_zeroes": true, 00:14:05.321 "zcopy": true, 00:14:05.321 "get_zone_info": false, 00:14:05.321 "zone_management": false, 00:14:05.321 "zone_append": false, 00:14:05.321 "compare": false, 00:14:05.321 "compare_and_write": false, 00:14:05.321 "abort": true, 00:14:05.321 "seek_hole": false, 00:14:05.321 "seek_data": false, 00:14:05.321 "copy": true, 00:14:05.321 "nvme_iov_md": false 00:14:05.321 }, 00:14:05.321 "memory_domains": [ 00:14:05.321 { 00:14:05.321 "dma_device_id": "system", 00:14:05.321 "dma_device_type": 1 00:14:05.321 }, 00:14:05.321 { 00:14:05.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.321 "dma_device_type": 2 00:14:05.321 } 00:14:05.321 ], 00:14:05.321 "driver_specific": {} 00:14:05.321 } 00:14:05.321 ] 
00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.321 "name": "Existed_Raid", 00:14:05.321 "uuid": "53f63f81-d65d-430f-8580-1d37f3ad49f4", 00:14:05.321 "strip_size_kb": 64, 00:14:05.321 "state": "online", 00:14:05.321 "raid_level": "raid0", 00:14:05.321 "superblock": false, 00:14:05.321 "num_base_bdevs": 3, 00:14:05.321 "num_base_bdevs_discovered": 3, 00:14:05.321 "num_base_bdevs_operational": 3, 00:14:05.321 "base_bdevs_list": [ 00:14:05.321 { 00:14:05.321 "name": "BaseBdev1", 00:14:05.321 "uuid": "fa8f3b9f-0a99-4512-8493-296f66bbbc77", 00:14:05.321 "is_configured": true, 00:14:05.321 "data_offset": 0, 00:14:05.321 "data_size": 65536 00:14:05.321 }, 00:14:05.321 { 00:14:05.321 "name": "BaseBdev2", 00:14:05.321 "uuid": "300b17ea-4610-4c2f-a74c-279a1a99e495", 00:14:05.321 "is_configured": true, 00:14:05.321 "data_offset": 0, 00:14:05.321 "data_size": 65536 00:14:05.321 }, 00:14:05.321 { 00:14:05.321 "name": "BaseBdev3", 00:14:05.321 "uuid": "c0621501-5457-45d6-8138-2d92fc4b067b", 00:14:05.321 "is_configured": true, 00:14:05.321 "data_offset": 0, 00:14:05.321 "data_size": 65536 00:14:05.321 } 00:14:05.321 ] 00:14:05.321 }' 00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.321 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.887 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:05.887 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:05.887 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:05.887 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:14:05.887 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:05.887 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:05.887 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:05.887 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:05.887 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.887 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.887 [2024-12-06 13:08:52.677122] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:05.887 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.887 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:05.887 "name": "Existed_Raid", 00:14:05.887 "aliases": [ 00:14:05.887 "53f63f81-d65d-430f-8580-1d37f3ad49f4" 00:14:05.887 ], 00:14:05.887 "product_name": "Raid Volume", 00:14:05.887 "block_size": 512, 00:14:05.887 "num_blocks": 196608, 00:14:05.887 "uuid": "53f63f81-d65d-430f-8580-1d37f3ad49f4", 00:14:05.887 "assigned_rate_limits": { 00:14:05.887 "rw_ios_per_sec": 0, 00:14:05.887 "rw_mbytes_per_sec": 0, 00:14:05.887 "r_mbytes_per_sec": 0, 00:14:05.887 "w_mbytes_per_sec": 0 00:14:05.887 }, 00:14:05.887 "claimed": false, 00:14:05.887 "zoned": false, 00:14:05.887 "supported_io_types": { 00:14:05.887 "read": true, 00:14:05.887 "write": true, 00:14:05.887 "unmap": true, 00:14:05.887 "flush": true, 00:14:05.887 "reset": true, 00:14:05.887 "nvme_admin": false, 00:14:05.887 "nvme_io": false, 00:14:05.887 "nvme_io_md": false, 00:14:05.887 "write_zeroes": true, 00:14:05.887 "zcopy": false, 00:14:05.887 "get_zone_info": false, 00:14:05.887 "zone_management": false, 00:14:05.887 
"zone_append": false, 00:14:05.887 "compare": false, 00:14:05.887 "compare_and_write": false, 00:14:05.887 "abort": false, 00:14:05.887 "seek_hole": false, 00:14:05.887 "seek_data": false, 00:14:05.887 "copy": false, 00:14:05.887 "nvme_iov_md": false 00:14:05.887 }, 00:14:05.887 "memory_domains": [ 00:14:05.887 { 00:14:05.887 "dma_device_id": "system", 00:14:05.887 "dma_device_type": 1 00:14:05.887 }, 00:14:05.887 { 00:14:05.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.887 "dma_device_type": 2 00:14:05.887 }, 00:14:05.887 { 00:14:05.887 "dma_device_id": "system", 00:14:05.887 "dma_device_type": 1 00:14:05.887 }, 00:14:05.887 { 00:14:05.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.887 "dma_device_type": 2 00:14:05.887 }, 00:14:05.887 { 00:14:05.887 "dma_device_id": "system", 00:14:05.887 "dma_device_type": 1 00:14:05.887 }, 00:14:05.887 { 00:14:05.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.887 "dma_device_type": 2 00:14:05.887 } 00:14:05.887 ], 00:14:05.887 "driver_specific": { 00:14:05.887 "raid": { 00:14:05.887 "uuid": "53f63f81-d65d-430f-8580-1d37f3ad49f4", 00:14:05.887 "strip_size_kb": 64, 00:14:05.887 "state": "online", 00:14:05.887 "raid_level": "raid0", 00:14:05.887 "superblock": false, 00:14:05.887 "num_base_bdevs": 3, 00:14:05.887 "num_base_bdevs_discovered": 3, 00:14:05.887 "num_base_bdevs_operational": 3, 00:14:05.887 "base_bdevs_list": [ 00:14:05.887 { 00:14:05.887 "name": "BaseBdev1", 00:14:05.887 "uuid": "fa8f3b9f-0a99-4512-8493-296f66bbbc77", 00:14:05.887 "is_configured": true, 00:14:05.887 "data_offset": 0, 00:14:05.887 "data_size": 65536 00:14:05.887 }, 00:14:05.887 { 00:14:05.887 "name": "BaseBdev2", 00:14:05.887 "uuid": "300b17ea-4610-4c2f-a74c-279a1a99e495", 00:14:05.887 "is_configured": true, 00:14:05.887 "data_offset": 0, 00:14:05.887 "data_size": 65536 00:14:05.887 }, 00:14:05.887 { 00:14:05.887 "name": "BaseBdev3", 00:14:05.887 "uuid": "c0621501-5457-45d6-8138-2d92fc4b067b", 00:14:05.887 "is_configured": true, 
00:14:05.887 "data_offset": 0, 00:14:05.887 "data_size": 65536 00:14:05.887 } 00:14:05.887 ] 00:14:05.887 } 00:14:05.887 } 00:14:05.887 }' 00:14:05.887 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:05.887 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:05.887 BaseBdev2 00:14:05.887 BaseBdev3' 00:14:05.887 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.887 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:05.887 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:05.887 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:05.887 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.887 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.887 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.887 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.887 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:05.887 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:05.887 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:05.887 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:05.887 13:08:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.887 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.887 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.887 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.146 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:06.146 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:06.146 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:06.146 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:06.146 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:06.146 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.146 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.146 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.146 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:06.146 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:06.146 13:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:06.146 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.146 13:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.146 [2024-12-06 13:08:52.980875] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:06.146 [2024-12-06 13:08:52.980910] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:06.146 [2024-12-06 13:08:52.980981] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:06.146 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.146 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:06.146 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:14:06.146 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:06.146 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:06.146 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:06.146 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:14:06.146 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:06.146 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:06.146 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:06.146 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.146 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:06.146 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.146 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.146 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:06.146 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.146 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.146 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.146 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.146 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.146 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.146 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.146 "name": "Existed_Raid", 00:14:06.146 "uuid": "53f63f81-d65d-430f-8580-1d37f3ad49f4", 00:14:06.146 "strip_size_kb": 64, 00:14:06.146 "state": "offline", 00:14:06.146 "raid_level": "raid0", 00:14:06.146 "superblock": false, 00:14:06.146 "num_base_bdevs": 3, 00:14:06.146 "num_base_bdevs_discovered": 2, 00:14:06.146 "num_base_bdevs_operational": 2, 00:14:06.146 "base_bdevs_list": [ 00:14:06.146 { 00:14:06.146 "name": null, 00:14:06.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.146 "is_configured": false, 00:14:06.146 "data_offset": 0, 00:14:06.146 "data_size": 65536 00:14:06.146 }, 00:14:06.146 { 00:14:06.146 "name": "BaseBdev2", 00:14:06.147 "uuid": "300b17ea-4610-4c2f-a74c-279a1a99e495", 00:14:06.147 "is_configured": true, 00:14:06.147 "data_offset": 0, 00:14:06.147 "data_size": 65536 00:14:06.147 }, 00:14:06.147 { 00:14:06.147 "name": "BaseBdev3", 00:14:06.147 "uuid": "c0621501-5457-45d6-8138-2d92fc4b067b", 00:14:06.147 "is_configured": true, 00:14:06.147 "data_offset": 0, 00:14:06.147 "data_size": 65536 00:14:06.147 } 00:14:06.147 ] 00:14:06.147 }' 00:14:06.147 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.147 13:08:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.712 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:06.712 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:06.712 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:06.712 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.712 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.712 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.712 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.712 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:06.712 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:06.712 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:06.712 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.712 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.712 [2024-12-06 13:08:53.667773] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:06.970 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.970 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:06.970 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:06.970 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.970 13:08:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:06.970 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.970 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.970 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.970 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:06.970 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:06.970 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:06.970 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.970 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.970 [2024-12-06 13:08:53.811912] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:06.970 [2024-12-06 13:08:53.811982] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:06.970 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.970 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:06.970 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:06.970 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.970 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.970 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:06.970 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:14:06.970 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.970 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:06.970 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:06.970 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:06.970 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:06.970 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:06.970 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:06.970 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.970 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.228 BaseBdev2 00:14:07.228 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.228 13:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:07.228 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:07.228 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:07.228 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:07.228 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:07.228 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:07.228 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:07.228 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:07.228 13:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.228 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.228 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:07.228 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.228 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.228 [ 00:14:07.228 { 00:14:07.228 "name": "BaseBdev2", 00:14:07.228 "aliases": [ 00:14:07.228 "99f09d4c-f7cd-4921-bb7c-ccb5f5d039f9" 00:14:07.228 ], 00:14:07.228 "product_name": "Malloc disk", 00:14:07.228 "block_size": 512, 00:14:07.228 "num_blocks": 65536, 00:14:07.228 "uuid": "99f09d4c-f7cd-4921-bb7c-ccb5f5d039f9", 00:14:07.228 "assigned_rate_limits": { 00:14:07.228 "rw_ios_per_sec": 0, 00:14:07.228 "rw_mbytes_per_sec": 0, 00:14:07.228 "r_mbytes_per_sec": 0, 00:14:07.228 "w_mbytes_per_sec": 0 00:14:07.228 }, 00:14:07.228 "claimed": false, 00:14:07.228 "zoned": false, 00:14:07.228 "supported_io_types": { 00:14:07.228 "read": true, 00:14:07.228 "write": true, 00:14:07.228 "unmap": true, 00:14:07.228 "flush": true, 00:14:07.228 "reset": true, 00:14:07.228 "nvme_admin": false, 00:14:07.228 "nvme_io": false, 00:14:07.228 "nvme_io_md": false, 00:14:07.228 "write_zeroes": true, 00:14:07.228 "zcopy": true, 00:14:07.228 "get_zone_info": false, 00:14:07.228 "zone_management": false, 00:14:07.228 "zone_append": false, 00:14:07.228 "compare": false, 00:14:07.228 "compare_and_write": false, 00:14:07.228 "abort": true, 00:14:07.228 "seek_hole": false, 00:14:07.228 "seek_data": false, 00:14:07.228 "copy": true, 00:14:07.228 "nvme_iov_md": false 00:14:07.228 }, 00:14:07.228 "memory_domains": [ 00:14:07.228 { 00:14:07.228 "dma_device_id": "system", 00:14:07.228 "dma_device_type": 1 00:14:07.228 }, 
00:14:07.228 { 00:14:07.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.228 "dma_device_type": 2 00:14:07.228 } 00:14:07.228 ], 00:14:07.228 "driver_specific": {} 00:14:07.228 } 00:14:07.228 ] 00:14:07.228 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.228 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:07.228 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:07.228 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:07.228 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:07.228 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.228 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.228 BaseBdev3 00:14:07.228 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.228 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:07.228 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:07.229 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:07.229 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:07.229 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:07.229 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:07.229 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:07.229 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
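An aside to the trace: the `waitforbdev BaseBdev2` helper above polls `rpc.py bdev_get_bdevs -b BaseBdev2 -t 2000` until the bdev is reported. A minimal sketch of the property check this enables, run against the JSON record printed in the log (trimmed here to the fields used — the full record carries many more keys), might look like the following; this is illustrative Python, not part of the test scripts:

```python
import json

# BaseBdev2 as reported by `rpc.py bdev_get_bdevs -b BaseBdev2` in the log
# above, trimmed to the fields checked below.
bdev_get_bdevs_output = json.loads("""
[
  {
    "name": "BaseBdev2",
    "aliases": ["99f09d4c-f7cd-4921-bb7c-ccb5f5d039f9"],
    "product_name": "Malloc disk",
    "block_size": 512,
    "num_blocks": 65536,
    "uuid": "99f09d4c-f7cd-4921-bb7c-ccb5f5d039f9",
    "claimed": false,
    "supported_io_types": {"read": true, "write": true, "unmap": true}
  }
]
""")

bdev = bdev_get_bdevs_output[0]

# `bdev_malloc_create 32 512` asks for a 32 MiB disk in 512-byte blocks:
# 32 * 1024 * 1024 / 512 = 65536 blocks, matching the reported num_blocks.
assert bdev["product_name"] == "Malloc disk"
assert bdev["block_size"] == 512
assert bdev["num_blocks"] == 32 * 1024 * 1024 // 512

# At this point in the trace the malloc bdev is not yet claimed by the raid.
assert bdev["claimed"] is False
```

The same shape of record appears again for BaseBdev3 immediately below, differing only in name and UUID.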
00:14:07.229 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.229 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.229 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:07.229 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.229 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.229 [ 00:14:07.229 { 00:14:07.229 "name": "BaseBdev3", 00:14:07.229 "aliases": [ 00:14:07.229 "c6610467-a2d8-4ab9-8883-abfb135f61b9" 00:14:07.229 ], 00:14:07.229 "product_name": "Malloc disk", 00:14:07.229 "block_size": 512, 00:14:07.229 "num_blocks": 65536, 00:14:07.229 "uuid": "c6610467-a2d8-4ab9-8883-abfb135f61b9", 00:14:07.229 "assigned_rate_limits": { 00:14:07.229 "rw_ios_per_sec": 0, 00:14:07.229 "rw_mbytes_per_sec": 0, 00:14:07.229 "r_mbytes_per_sec": 0, 00:14:07.229 "w_mbytes_per_sec": 0 00:14:07.229 }, 00:14:07.229 "claimed": false, 00:14:07.229 "zoned": false, 00:14:07.229 "supported_io_types": { 00:14:07.229 "read": true, 00:14:07.229 "write": true, 00:14:07.229 "unmap": true, 00:14:07.229 "flush": true, 00:14:07.229 "reset": true, 00:14:07.229 "nvme_admin": false, 00:14:07.229 "nvme_io": false, 00:14:07.229 "nvme_io_md": false, 00:14:07.229 "write_zeroes": true, 00:14:07.229 "zcopy": true, 00:14:07.229 "get_zone_info": false, 00:14:07.229 "zone_management": false, 00:14:07.229 "zone_append": false, 00:14:07.229 "compare": false, 00:14:07.229 "compare_and_write": false, 00:14:07.229 "abort": true, 00:14:07.229 "seek_hole": false, 00:14:07.229 "seek_data": false, 00:14:07.229 "copy": true, 00:14:07.229 "nvme_iov_md": false 00:14:07.229 }, 00:14:07.229 "memory_domains": [ 00:14:07.229 { 00:14:07.229 "dma_device_id": "system", 00:14:07.229 "dma_device_type": 1 00:14:07.229 }, 00:14:07.229 { 
00:14:07.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.229 "dma_device_type": 2 00:14:07.229 } 00:14:07.229 ], 00:14:07.229 "driver_specific": {} 00:14:07.229 } 00:14:07.229 ] 00:14:07.229 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.229 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:07.229 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:07.229 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:07.229 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:07.229 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.229 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.229 [2024-12-06 13:08:54.108528] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:07.229 [2024-12-06 13:08:54.108581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:07.229 [2024-12-06 13:08:54.108612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:07.229 [2024-12-06 13:08:54.110939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:07.229 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.229 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:07.229 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:07.229 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:14:07.229 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:07.229 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:07.229 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:07.229 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.229 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.229 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.229 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.229 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.229 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.229 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.229 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.229 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.229 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.229 "name": "Existed_Raid", 00:14:07.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.229 "strip_size_kb": 64, 00:14:07.229 "state": "configuring", 00:14:07.229 "raid_level": "raid0", 00:14:07.229 "superblock": false, 00:14:07.229 "num_base_bdevs": 3, 00:14:07.229 "num_base_bdevs_discovered": 2, 00:14:07.229 "num_base_bdevs_operational": 3, 00:14:07.229 "base_bdevs_list": [ 00:14:07.229 { 00:14:07.229 "name": "BaseBdev1", 00:14:07.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.229 
"is_configured": false, 00:14:07.229 "data_offset": 0, 00:14:07.229 "data_size": 0 00:14:07.229 }, 00:14:07.229 { 00:14:07.229 "name": "BaseBdev2", 00:14:07.229 "uuid": "99f09d4c-f7cd-4921-bb7c-ccb5f5d039f9", 00:14:07.229 "is_configured": true, 00:14:07.229 "data_offset": 0, 00:14:07.229 "data_size": 65536 00:14:07.229 }, 00:14:07.229 { 00:14:07.229 "name": "BaseBdev3", 00:14:07.229 "uuid": "c6610467-a2d8-4ab9-8883-abfb135f61b9", 00:14:07.229 "is_configured": true, 00:14:07.229 "data_offset": 0, 00:14:07.229 "data_size": 65536 00:14:07.229 } 00:14:07.229 ] 00:14:07.229 }' 00:14:07.229 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.229 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.795 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:07.795 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.795 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.795 [2024-12-06 13:08:54.612701] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:07.795 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.795 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:07.795 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:07.795 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:07.795 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:07.795 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:07.795 13:08:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:07.795 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.795 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.795 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.795 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.795 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.795 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.795 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.795 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.795 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.795 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.795 "name": "Existed_Raid", 00:14:07.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.795 "strip_size_kb": 64, 00:14:07.795 "state": "configuring", 00:14:07.795 "raid_level": "raid0", 00:14:07.795 "superblock": false, 00:14:07.795 "num_base_bdevs": 3, 00:14:07.795 "num_base_bdevs_discovered": 1, 00:14:07.795 "num_base_bdevs_operational": 3, 00:14:07.795 "base_bdevs_list": [ 00:14:07.795 { 00:14:07.795 "name": "BaseBdev1", 00:14:07.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.795 "is_configured": false, 00:14:07.795 "data_offset": 0, 00:14:07.795 "data_size": 0 00:14:07.795 }, 00:14:07.795 { 00:14:07.795 "name": null, 00:14:07.795 "uuid": "99f09d4c-f7cd-4921-bb7c-ccb5f5d039f9", 00:14:07.795 "is_configured": false, 00:14:07.795 "data_offset": 0, 
00:14:07.795 "data_size": 65536 00:14:07.795 }, 00:14:07.795 { 00:14:07.795 "name": "BaseBdev3", 00:14:07.795 "uuid": "c6610467-a2d8-4ab9-8883-abfb135f61b9", 00:14:07.795 "is_configured": true, 00:14:07.795 "data_offset": 0, 00:14:07.795 "data_size": 65536 00:14:07.795 } 00:14:07.795 ] 00:14:07.795 }' 00:14:07.795 13:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.795 13:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.361 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.361 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.361 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.361 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:08.361 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.361 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:08.361 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:08.361 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.361 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.361 [2024-12-06 13:08:55.234397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:08.361 BaseBdev1 00:14:08.361 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.361 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:08.361 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:14:08.361 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:08.361 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:08.361 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:08.361 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:08.361 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:08.361 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.361 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.361 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.361 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:08.361 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.361 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.361 [ 00:14:08.361 { 00:14:08.361 "name": "BaseBdev1", 00:14:08.361 "aliases": [ 00:14:08.361 "0f022d4d-912b-421e-8193-042547d68776" 00:14:08.361 ], 00:14:08.361 "product_name": "Malloc disk", 00:14:08.361 "block_size": 512, 00:14:08.361 "num_blocks": 65536, 00:14:08.361 "uuid": "0f022d4d-912b-421e-8193-042547d68776", 00:14:08.361 "assigned_rate_limits": { 00:14:08.361 "rw_ios_per_sec": 0, 00:14:08.361 "rw_mbytes_per_sec": 0, 00:14:08.361 "r_mbytes_per_sec": 0, 00:14:08.361 "w_mbytes_per_sec": 0 00:14:08.361 }, 00:14:08.361 "claimed": true, 00:14:08.361 "claim_type": "exclusive_write", 00:14:08.361 "zoned": false, 00:14:08.361 "supported_io_types": { 00:14:08.361 "read": true, 00:14:08.361 "write": true, 00:14:08.361 "unmap": 
true, 00:14:08.361 "flush": true, 00:14:08.361 "reset": true, 00:14:08.361 "nvme_admin": false, 00:14:08.361 "nvme_io": false, 00:14:08.361 "nvme_io_md": false, 00:14:08.361 "write_zeroes": true, 00:14:08.361 "zcopy": true, 00:14:08.361 "get_zone_info": false, 00:14:08.361 "zone_management": false, 00:14:08.361 "zone_append": false, 00:14:08.361 "compare": false, 00:14:08.361 "compare_and_write": false, 00:14:08.361 "abort": true, 00:14:08.361 "seek_hole": false, 00:14:08.361 "seek_data": false, 00:14:08.361 "copy": true, 00:14:08.361 "nvme_iov_md": false 00:14:08.361 }, 00:14:08.361 "memory_domains": [ 00:14:08.361 { 00:14:08.362 "dma_device_id": "system", 00:14:08.362 "dma_device_type": 1 00:14:08.362 }, 00:14:08.362 { 00:14:08.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.362 "dma_device_type": 2 00:14:08.362 } 00:14:08.362 ], 00:14:08.362 "driver_specific": {} 00:14:08.362 } 00:14:08.362 ] 00:14:08.362 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.362 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:08.362 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:08.362 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:08.362 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:08.362 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:08.362 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.362 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:08.362 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.362 13:08:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.362 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.362 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.362 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.362 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.362 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.362 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.362 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.362 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.362 "name": "Existed_Raid", 00:14:08.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.362 "strip_size_kb": 64, 00:14:08.362 "state": "configuring", 00:14:08.362 "raid_level": "raid0", 00:14:08.362 "superblock": false, 00:14:08.362 "num_base_bdevs": 3, 00:14:08.362 "num_base_bdevs_discovered": 2, 00:14:08.362 "num_base_bdevs_operational": 3, 00:14:08.362 "base_bdevs_list": [ 00:14:08.362 { 00:14:08.362 "name": "BaseBdev1", 00:14:08.362 "uuid": "0f022d4d-912b-421e-8193-042547d68776", 00:14:08.362 "is_configured": true, 00:14:08.362 "data_offset": 0, 00:14:08.362 "data_size": 65536 00:14:08.362 }, 00:14:08.362 { 00:14:08.362 "name": null, 00:14:08.362 "uuid": "99f09d4c-f7cd-4921-bb7c-ccb5f5d039f9", 00:14:08.362 "is_configured": false, 00:14:08.362 "data_offset": 0, 00:14:08.362 "data_size": 65536 00:14:08.362 }, 00:14:08.362 { 00:14:08.362 "name": "BaseBdev3", 00:14:08.362 "uuid": "c6610467-a2d8-4ab9-8883-abfb135f61b9", 00:14:08.362 "is_configured": true, 00:14:08.362 "data_offset": 0, 
00:14:08.362 "data_size": 65536 00:14:08.362 } 00:14:08.362 ] 00:14:08.362 }' 00:14:08.362 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.362 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.944 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.944 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.944 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.944 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:08.944 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.944 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:08.944 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:08.944 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.944 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.944 [2024-12-06 13:08:55.826613] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:08.944 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.944 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:08.944 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:08.944 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:08.944 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:14:08.944 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.944 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:08.944 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.944 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.944 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.944 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.944 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.944 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.944 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.944 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.944 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.944 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.944 "name": "Existed_Raid", 00:14:08.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.944 "strip_size_kb": 64, 00:14:08.944 "state": "configuring", 00:14:08.944 "raid_level": "raid0", 00:14:08.944 "superblock": false, 00:14:08.944 "num_base_bdevs": 3, 00:14:08.944 "num_base_bdevs_discovered": 1, 00:14:08.944 "num_base_bdevs_operational": 3, 00:14:08.944 "base_bdevs_list": [ 00:14:08.944 { 00:14:08.944 "name": "BaseBdev1", 00:14:08.944 "uuid": "0f022d4d-912b-421e-8193-042547d68776", 00:14:08.944 "is_configured": true, 00:14:08.944 "data_offset": 0, 00:14:08.944 "data_size": 65536 00:14:08.944 }, 00:14:08.944 { 
00:14:08.944 "name": null, 00:14:08.944 "uuid": "99f09d4c-f7cd-4921-bb7c-ccb5f5d039f9", 00:14:08.944 "is_configured": false, 00:14:08.944 "data_offset": 0, 00:14:08.944 "data_size": 65536 00:14:08.944 }, 00:14:08.944 { 00:14:08.944 "name": null, 00:14:08.944 "uuid": "c6610467-a2d8-4ab9-8883-abfb135f61b9", 00:14:08.944 "is_configured": false, 00:14:08.944 "data_offset": 0, 00:14:08.944 "data_size": 65536 00:14:08.944 } 00:14:08.944 ] 00:14:08.944 }' 00:14:08.944 13:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.944 13:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.510 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.510 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:09.510 13:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.510 13:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.510 13:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.510 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:09.510 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:09.510 13:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.510 13:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.510 [2024-12-06 13:08:56.362806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:09.510 13:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.510 13:08:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:09.510 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:09.510 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:09.510 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:09.510 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:09.510 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:09.510 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.510 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.510 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.510 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.510 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.510 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.510 13:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.510 13:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.510 13:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.510 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.510 "name": "Existed_Raid", 00:14:09.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.510 "strip_size_kb": 64, 00:14:09.510 "state": "configuring", 00:14:09.510 "raid_level": "raid0", 00:14:09.510 
"superblock": false, 00:14:09.510 "num_base_bdevs": 3, 00:14:09.510 "num_base_bdevs_discovered": 2, 00:14:09.510 "num_base_bdevs_operational": 3, 00:14:09.510 "base_bdevs_list": [ 00:14:09.510 { 00:14:09.510 "name": "BaseBdev1", 00:14:09.510 "uuid": "0f022d4d-912b-421e-8193-042547d68776", 00:14:09.510 "is_configured": true, 00:14:09.510 "data_offset": 0, 00:14:09.510 "data_size": 65536 00:14:09.510 }, 00:14:09.510 { 00:14:09.510 "name": null, 00:14:09.510 "uuid": "99f09d4c-f7cd-4921-bb7c-ccb5f5d039f9", 00:14:09.510 "is_configured": false, 00:14:09.510 "data_offset": 0, 00:14:09.510 "data_size": 65536 00:14:09.510 }, 00:14:09.510 { 00:14:09.510 "name": "BaseBdev3", 00:14:09.510 "uuid": "c6610467-a2d8-4ab9-8883-abfb135f61b9", 00:14:09.510 "is_configured": true, 00:14:09.510 "data_offset": 0, 00:14:09.510 "data_size": 65536 00:14:09.510 } 00:14:09.510 ] 00:14:09.510 }' 00:14:09.510 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.510 13:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.127 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.127 13:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.127 13:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.127 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:10.127 13:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.127 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:10.127 13:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:10.127 13:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:10.127 13:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.127 [2024-12-06 13:08:56.950988] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:10.127 13:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.127 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:10.127 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.127 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:10.127 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:10.127 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.127 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:10.127 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.127 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.127 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.127 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.127 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.127 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.127 13:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.127 13:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.127 13:08:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.127 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.127 "name": "Existed_Raid", 00:14:10.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.127 "strip_size_kb": 64, 00:14:10.128 "state": "configuring", 00:14:10.128 "raid_level": "raid0", 00:14:10.128 "superblock": false, 00:14:10.128 "num_base_bdevs": 3, 00:14:10.128 "num_base_bdevs_discovered": 1, 00:14:10.128 "num_base_bdevs_operational": 3, 00:14:10.128 "base_bdevs_list": [ 00:14:10.128 { 00:14:10.128 "name": null, 00:14:10.128 "uuid": "0f022d4d-912b-421e-8193-042547d68776", 00:14:10.128 "is_configured": false, 00:14:10.128 "data_offset": 0, 00:14:10.128 "data_size": 65536 00:14:10.128 }, 00:14:10.128 { 00:14:10.128 "name": null, 00:14:10.128 "uuid": "99f09d4c-f7cd-4921-bb7c-ccb5f5d039f9", 00:14:10.128 "is_configured": false, 00:14:10.128 "data_offset": 0, 00:14:10.128 "data_size": 65536 00:14:10.128 }, 00:14:10.128 { 00:14:10.128 "name": "BaseBdev3", 00:14:10.128 "uuid": "c6610467-a2d8-4ab9-8883-abfb135f61b9", 00:14:10.128 "is_configured": true, 00:14:10.128 "data_offset": 0, 00:14:10.128 "data_size": 65536 00:14:10.128 } 00:14:10.128 ] 00:14:10.128 }' 00:14:10.128 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.128 13:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.694 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.694 13:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.694 13:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.694 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:10.694 13:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:14:10.694 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:10.694 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:10.694 13:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.694 13:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.694 [2024-12-06 13:08:57.591622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:10.694 13:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.694 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:10.694 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.694 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:10.694 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:10.694 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.694 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:10.694 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.694 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.694 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.694 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.694 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:10.694 13:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.694 13:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.694 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.694 13:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.694 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.694 "name": "Existed_Raid", 00:14:10.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.694 "strip_size_kb": 64, 00:14:10.694 "state": "configuring", 00:14:10.694 "raid_level": "raid0", 00:14:10.694 "superblock": false, 00:14:10.694 "num_base_bdevs": 3, 00:14:10.694 "num_base_bdevs_discovered": 2, 00:14:10.694 "num_base_bdevs_operational": 3, 00:14:10.694 "base_bdevs_list": [ 00:14:10.694 { 00:14:10.694 "name": null, 00:14:10.694 "uuid": "0f022d4d-912b-421e-8193-042547d68776", 00:14:10.694 "is_configured": false, 00:14:10.694 "data_offset": 0, 00:14:10.694 "data_size": 65536 00:14:10.694 }, 00:14:10.694 { 00:14:10.694 "name": "BaseBdev2", 00:14:10.694 "uuid": "99f09d4c-f7cd-4921-bb7c-ccb5f5d039f9", 00:14:10.694 "is_configured": true, 00:14:10.694 "data_offset": 0, 00:14:10.694 "data_size": 65536 00:14:10.694 }, 00:14:10.694 { 00:14:10.694 "name": "BaseBdev3", 00:14:10.694 "uuid": "c6610467-a2d8-4ab9-8883-abfb135f61b9", 00:14:10.694 "is_configured": true, 00:14:10.694 "data_offset": 0, 00:14:10.694 "data_size": 65536 00:14:10.694 } 00:14:10.694 ] 00:14:10.694 }' 00:14:10.694 13:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.694 13:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.260 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.260 13:08:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:11.260 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.260 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.260 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.260 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:11.260 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.260 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:11.260 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.260 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.260 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.260 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0f022d4d-912b-421e-8193-042547d68776 00:14:11.260 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.260 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.260 [2024-12-06 13:08:58.241657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:11.260 [2024-12-06 13:08:58.241718] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:11.260 [2024-12-06 13:08:58.241735] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:11.260 [2024-12-06 13:08:58.242054] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:14:11.260 [2024-12-06 13:08:58.242252] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:11.261 [2024-12-06 13:08:58.242277] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:11.261 [2024-12-06 13:08:58.242613] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.261 NewBaseBdev 00:14:11.261 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.261 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:11.261 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:11.261 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:11.261 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:11.261 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:11.261 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:11.261 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:11.261 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.261 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.261 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.261 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:11.261 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.261 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:14:11.261 [ 00:14:11.261 { 00:14:11.261 "name": "NewBaseBdev", 00:14:11.261 "aliases": [ 00:14:11.261 "0f022d4d-912b-421e-8193-042547d68776" 00:14:11.261 ], 00:14:11.261 "product_name": "Malloc disk", 00:14:11.261 "block_size": 512, 00:14:11.261 "num_blocks": 65536, 00:14:11.261 "uuid": "0f022d4d-912b-421e-8193-042547d68776", 00:14:11.261 "assigned_rate_limits": { 00:14:11.261 "rw_ios_per_sec": 0, 00:14:11.261 "rw_mbytes_per_sec": 0, 00:14:11.261 "r_mbytes_per_sec": 0, 00:14:11.261 "w_mbytes_per_sec": 0 00:14:11.261 }, 00:14:11.261 "claimed": true, 00:14:11.261 "claim_type": "exclusive_write", 00:14:11.261 "zoned": false, 00:14:11.261 "supported_io_types": { 00:14:11.261 "read": true, 00:14:11.261 "write": true, 00:14:11.261 "unmap": true, 00:14:11.261 "flush": true, 00:14:11.261 "reset": true, 00:14:11.261 "nvme_admin": false, 00:14:11.261 "nvme_io": false, 00:14:11.261 "nvme_io_md": false, 00:14:11.261 "write_zeroes": true, 00:14:11.261 "zcopy": true, 00:14:11.261 "get_zone_info": false, 00:14:11.261 "zone_management": false, 00:14:11.261 "zone_append": false, 00:14:11.261 "compare": false, 00:14:11.261 "compare_and_write": false, 00:14:11.261 "abort": true, 00:14:11.261 "seek_hole": false, 00:14:11.261 "seek_data": false, 00:14:11.261 "copy": true, 00:14:11.261 "nvme_iov_md": false 00:14:11.261 }, 00:14:11.261 "memory_domains": [ 00:14:11.261 { 00:14:11.261 "dma_device_id": "system", 00:14:11.261 "dma_device_type": 1 00:14:11.261 }, 00:14:11.261 { 00:14:11.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.261 "dma_device_type": 2 00:14:11.261 } 00:14:11.261 ], 00:14:11.261 "driver_specific": {} 00:14:11.261 } 00:14:11.261 ] 00:14:11.261 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.520 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:11.520 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:14:11.520 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:11.520 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.520 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:11.520 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:11.520 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:11.520 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.520 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.520 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.520 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.520 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.520 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.520 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.520 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.520 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.520 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.520 "name": "Existed_Raid", 00:14:11.520 "uuid": "78cd1d6a-c24b-4ac1-bbd7-3cb8078fa4db", 00:14:11.520 "strip_size_kb": 64, 00:14:11.520 "state": "online", 00:14:11.520 "raid_level": "raid0", 00:14:11.520 "superblock": false, 00:14:11.520 "num_base_bdevs": 3, 00:14:11.520 
"num_base_bdevs_discovered": 3, 00:14:11.520 "num_base_bdevs_operational": 3, 00:14:11.520 "base_bdevs_list": [ 00:14:11.520 { 00:14:11.520 "name": "NewBaseBdev", 00:14:11.520 "uuid": "0f022d4d-912b-421e-8193-042547d68776", 00:14:11.520 "is_configured": true, 00:14:11.520 "data_offset": 0, 00:14:11.520 "data_size": 65536 00:14:11.520 }, 00:14:11.520 { 00:14:11.520 "name": "BaseBdev2", 00:14:11.520 "uuid": "99f09d4c-f7cd-4921-bb7c-ccb5f5d039f9", 00:14:11.520 "is_configured": true, 00:14:11.520 "data_offset": 0, 00:14:11.520 "data_size": 65536 00:14:11.520 }, 00:14:11.520 { 00:14:11.520 "name": "BaseBdev3", 00:14:11.520 "uuid": "c6610467-a2d8-4ab9-8883-abfb135f61b9", 00:14:11.520 "is_configured": true, 00:14:11.520 "data_offset": 0, 00:14:11.520 "data_size": 65536 00:14:11.520 } 00:14:11.520 ] 00:14:11.520 }' 00:14:11.520 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.520 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.780 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:11.780 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:11.780 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:11.780 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:11.780 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:11.780 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:11.780 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:11.780 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:11.780 13:08:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.780 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.780 [2024-12-06 13:08:58.790223] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:12.039 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.039 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:12.039 "name": "Existed_Raid", 00:14:12.039 "aliases": [ 00:14:12.039 "78cd1d6a-c24b-4ac1-bbd7-3cb8078fa4db" 00:14:12.039 ], 00:14:12.040 "product_name": "Raid Volume", 00:14:12.040 "block_size": 512, 00:14:12.040 "num_blocks": 196608, 00:14:12.040 "uuid": "78cd1d6a-c24b-4ac1-bbd7-3cb8078fa4db", 00:14:12.040 "assigned_rate_limits": { 00:14:12.040 "rw_ios_per_sec": 0, 00:14:12.040 "rw_mbytes_per_sec": 0, 00:14:12.040 "r_mbytes_per_sec": 0, 00:14:12.040 "w_mbytes_per_sec": 0 00:14:12.040 }, 00:14:12.040 "claimed": false, 00:14:12.040 "zoned": false, 00:14:12.040 "supported_io_types": { 00:14:12.040 "read": true, 00:14:12.040 "write": true, 00:14:12.040 "unmap": true, 00:14:12.040 "flush": true, 00:14:12.040 "reset": true, 00:14:12.040 "nvme_admin": false, 00:14:12.040 "nvme_io": false, 00:14:12.040 "nvme_io_md": false, 00:14:12.040 "write_zeroes": true, 00:14:12.040 "zcopy": false, 00:14:12.040 "get_zone_info": false, 00:14:12.040 "zone_management": false, 00:14:12.040 "zone_append": false, 00:14:12.040 "compare": false, 00:14:12.040 "compare_and_write": false, 00:14:12.040 "abort": false, 00:14:12.040 "seek_hole": false, 00:14:12.040 "seek_data": false, 00:14:12.040 "copy": false, 00:14:12.040 "nvme_iov_md": false 00:14:12.040 }, 00:14:12.040 "memory_domains": [ 00:14:12.040 { 00:14:12.040 "dma_device_id": "system", 00:14:12.040 "dma_device_type": 1 00:14:12.040 }, 00:14:12.040 { 00:14:12.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.040 "dma_device_type": 2 00:14:12.040 }, 
00:14:12.040 { 00:14:12.040 "dma_device_id": "system", 00:14:12.040 "dma_device_type": 1 00:14:12.040 }, 00:14:12.040 { 00:14:12.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.040 "dma_device_type": 2 00:14:12.040 }, 00:14:12.040 { 00:14:12.040 "dma_device_id": "system", 00:14:12.040 "dma_device_type": 1 00:14:12.040 }, 00:14:12.040 { 00:14:12.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.040 "dma_device_type": 2 00:14:12.040 } 00:14:12.040 ], 00:14:12.040 "driver_specific": { 00:14:12.040 "raid": { 00:14:12.040 "uuid": "78cd1d6a-c24b-4ac1-bbd7-3cb8078fa4db", 00:14:12.040 "strip_size_kb": 64, 00:14:12.040 "state": "online", 00:14:12.040 "raid_level": "raid0", 00:14:12.040 "superblock": false, 00:14:12.040 "num_base_bdevs": 3, 00:14:12.040 "num_base_bdevs_discovered": 3, 00:14:12.040 "num_base_bdevs_operational": 3, 00:14:12.040 "base_bdevs_list": [ 00:14:12.040 { 00:14:12.040 "name": "NewBaseBdev", 00:14:12.040 "uuid": "0f022d4d-912b-421e-8193-042547d68776", 00:14:12.040 "is_configured": true, 00:14:12.040 "data_offset": 0, 00:14:12.040 "data_size": 65536 00:14:12.040 }, 00:14:12.040 { 00:14:12.040 "name": "BaseBdev2", 00:14:12.040 "uuid": "99f09d4c-f7cd-4921-bb7c-ccb5f5d039f9", 00:14:12.040 "is_configured": true, 00:14:12.040 "data_offset": 0, 00:14:12.040 "data_size": 65536 00:14:12.040 }, 00:14:12.040 { 00:14:12.040 "name": "BaseBdev3", 00:14:12.040 "uuid": "c6610467-a2d8-4ab9-8883-abfb135f61b9", 00:14:12.040 "is_configured": true, 00:14:12.040 "data_offset": 0, 00:14:12.040 "data_size": 65536 00:14:12.040 } 00:14:12.040 ] 00:14:12.040 } 00:14:12.040 } 00:14:12.040 }' 00:14:12.040 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:12.040 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:12.040 BaseBdev2 00:14:12.040 BaseBdev3' 00:14:12.040 13:08:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:12.040 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:12.040 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:12.040 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:12.040 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.040 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.040 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:12.040 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.040 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:12.040 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:12.040 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:12.040 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:12.040 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.040 13:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.040 13:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:12.040 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.040 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:14:12.040 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:12.040 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:12.040 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:12.040 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:12.040 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.040 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.298 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.298 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:12.298 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:12.298 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:12.298 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.298 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.298 [2024-12-06 13:08:59.093923] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:12.299 [2024-12-06 13:08:59.093959] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:12.299 [2024-12-06 13:08:59.094050] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:12.299 [2024-12-06 13:08:59.094122] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:12.299 [2024-12-06 13:08:59.094141] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:12.299 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.299 13:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 64040 00:14:12.299 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 64040 ']' 00:14:12.299 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 64040 00:14:12.299 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:12.299 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:12.299 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64040 00:14:12.299 killing process with pid 64040 00:14:12.299 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:12.299 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:12.299 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64040' 00:14:12.299 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 64040 00:14:12.299 [2024-12-06 13:08:59.128764] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:12.299 13:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 64040 00:14:12.557 [2024-12-06 13:08:59.392497] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:13.492 13:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:13.492 00:14:13.492 real 0m11.698s 00:14:13.492 user 0m19.374s 00:14:13.492 sys 0m1.637s 00:14:13.492 13:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:14:13.492 ************************************ 00:14:13.492 END TEST raid_state_function_test 00:14:13.492 13:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.492 ************************************ 00:14:13.492 13:09:00 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:14:13.492 13:09:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:13.492 13:09:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:13.492 13:09:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:13.492 ************************************ 00:14:13.492 START TEST raid_state_function_test_sb 00:14:13.492 ************************************ 00:14:13.492 13:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:14:13.492 13:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:14:13.492 13:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:13.492 13:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:13.492 13:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:13.750 13:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:13.750 13:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:13.750 13:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:13.750 13:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:13.750 13:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:13.750 13:09:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:13.750 13:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:13.750 13:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:13.750 13:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:13.750 13:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:13.750 13:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:13.750 13:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:13.750 13:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:13.750 13:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:13.750 13:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:13.750 Process raid pid: 64678 00:14:13.750 13:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:13.750 13:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:13.750 13:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:14:13.750 13:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:13.750 13:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:13.750 13:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:13.750 13:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:13.750 13:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64678 
00:14:13.750 13:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64678' 00:14:13.750 13:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64678 00:14:13.750 13:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:13.750 13:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64678 ']' 00:14:13.750 13:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.750 13:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:13.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.750 13:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.750 13:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:13.751 13:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.751 [2024-12-06 13:09:00.616007] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:14:13.751 [2024-12-06 13:09:00.616426] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.008 [2024-12-06 13:09:00.803571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.008 [2024-12-06 13:09:00.944853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.267 [2024-12-06 13:09:01.162754] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:14.267 [2024-12-06 13:09:01.163010] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:14.832 13:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:14.832 13:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:14.832 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:14.832 13:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.832 13:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.832 [2024-12-06 13:09:01.606148] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:14.832 [2024-12-06 13:09:01.606215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:14.832 [2024-12-06 13:09:01.606233] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:14.832 [2024-12-06 13:09:01.606249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:14.832 [2024-12-06 13:09:01.606259] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:14:14.832 [2024-12-06 13:09:01.606272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:14.832 13:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.832 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:14.832 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:14.832 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:14.832 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:14.832 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.833 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:14.833 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.833 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.833 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.833 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.833 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.833 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.833 13:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.833 13:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.833 13:09:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.833 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.833 "name": "Existed_Raid", 00:14:14.833 "uuid": "61ea1409-51fd-4bdd-9ee0-0f91dfeb6e88", 00:14:14.833 "strip_size_kb": 64, 00:14:14.833 "state": "configuring", 00:14:14.833 "raid_level": "raid0", 00:14:14.833 "superblock": true, 00:14:14.833 "num_base_bdevs": 3, 00:14:14.833 "num_base_bdevs_discovered": 0, 00:14:14.833 "num_base_bdevs_operational": 3, 00:14:14.833 "base_bdevs_list": [ 00:14:14.833 { 00:14:14.833 "name": "BaseBdev1", 00:14:14.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.833 "is_configured": false, 00:14:14.833 "data_offset": 0, 00:14:14.833 "data_size": 0 00:14:14.833 }, 00:14:14.833 { 00:14:14.833 "name": "BaseBdev2", 00:14:14.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.833 "is_configured": false, 00:14:14.833 "data_offset": 0, 00:14:14.833 "data_size": 0 00:14:14.833 }, 00:14:14.833 { 00:14:14.833 "name": "BaseBdev3", 00:14:14.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.833 "is_configured": false, 00:14:14.833 "data_offset": 0, 00:14:14.833 "data_size": 0 00:14:14.833 } 00:14:14.833 ] 00:14:14.833 }' 00:14:14.833 13:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.833 13:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.091 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:15.091 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.091 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.350 [2024-12-06 13:09:02.106238] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:15.350 [2024-12-06 13:09:02.106287] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:15.350 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.350 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:15.350 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.350 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.350 [2024-12-06 13:09:02.114222] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:15.350 [2024-12-06 13:09:02.114277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:15.350 [2024-12-06 13:09:02.114292] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:15.350 [2024-12-06 13:09:02.114309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:15.350 [2024-12-06 13:09:02.114318] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:15.350 [2024-12-06 13:09:02.114332] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:15.350 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.350 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:15.350 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.350 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.350 [2024-12-06 13:09:02.158511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:15.350 BaseBdev1 
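The xtrace above shows the test's `verify_raid_bdev_state` helper pulling `bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'` and comparing the state, raid level, strip size, and base-bdev counters against expected values. As a rough illustration (not the actual helper, which is shell in `bdev/bdev_raid.sh`), the same check can be sketched in Python against the exact JSON printed earlier in this log, where the raid bdev sits in `configuring` state with zero base bdevs discovered:

```python
import json

# `rpc.py bdev_raid_get_bdevs all` output filtered with
# jq '.[] | select(.name == "Existed_Raid")' -- copied from the log above,
# before any base bdev has been created.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "uuid": "61ea1409-51fd-4bdd-9ee0-0f91dfeb6e88",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid0",
  "superblock": true,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 0,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "uuid": "00000000-0000-0000-0000-000000000000",
     "is_configured": false, "data_offset": 0, "data_size": 0},
    {"name": "BaseBdev2", "uuid": "00000000-0000-0000-0000-000000000000",
     "is_configured": false, "data_offset": 0, "data_size": 0},
    {"name": "BaseBdev3", "uuid": "00000000-0000-0000-0000-000000000000",
     "is_configured": false, "data_offset": 0, "data_size": 0}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level,
                           strip_size_kb, num_operational):
    """Approximate re-statement of the shell helper's checks: compare the
    reported state/level/strip size, and cross-check the discovered counter
    against the per-base-bdev is_configured flags."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size_kb
    assert info["num_base_bdevs_operational"] == num_operational
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert discovered == info["num_base_bdevs_discovered"]
    return discovered

# Matches the verify_raid_bdev_state call in the log:
# verify_raid_bdev_state Existed_Raid configuring raid0 64 3
verify_raid_bdev_state(raid_bdev_info, "configuring", "raid0", 64, 3)
```

As the log proceeds, each `bdev_malloc_create 32 512 -b BaseBdevN` flips one `is_configured` flag and bumps `num_base_bdevs_discovered`; once all three are claimed the raid bdev transitions from `configuring` to `online`, which is what the later `verify_raid_bdev_state Existed_Raid online raid0 64 3` call asserts.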
00:14:15.350 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.350 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:15.350 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:15.350 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:15.350 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:15.350 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:15.350 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:15.350 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:15.350 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.350 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.350 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.350 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:15.350 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.350 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.350 [ 00:14:15.350 { 00:14:15.350 "name": "BaseBdev1", 00:14:15.350 "aliases": [ 00:14:15.350 "ba39f70c-28cd-4b74-968f-899421990b51" 00:14:15.350 ], 00:14:15.350 "product_name": "Malloc disk", 00:14:15.350 "block_size": 512, 00:14:15.350 "num_blocks": 65536, 00:14:15.350 "uuid": "ba39f70c-28cd-4b74-968f-899421990b51", 00:14:15.350 "assigned_rate_limits": { 00:14:15.350 
"rw_ios_per_sec": 0, 00:14:15.350 "rw_mbytes_per_sec": 0, 00:14:15.350 "r_mbytes_per_sec": 0, 00:14:15.350 "w_mbytes_per_sec": 0 00:14:15.350 }, 00:14:15.350 "claimed": true, 00:14:15.350 "claim_type": "exclusive_write", 00:14:15.350 "zoned": false, 00:14:15.350 "supported_io_types": { 00:14:15.350 "read": true, 00:14:15.350 "write": true, 00:14:15.350 "unmap": true, 00:14:15.350 "flush": true, 00:14:15.350 "reset": true, 00:14:15.350 "nvme_admin": false, 00:14:15.350 "nvme_io": false, 00:14:15.350 "nvme_io_md": false, 00:14:15.350 "write_zeroes": true, 00:14:15.350 "zcopy": true, 00:14:15.350 "get_zone_info": false, 00:14:15.350 "zone_management": false, 00:14:15.350 "zone_append": false, 00:14:15.350 "compare": false, 00:14:15.350 "compare_and_write": false, 00:14:15.350 "abort": true, 00:14:15.350 "seek_hole": false, 00:14:15.350 "seek_data": false, 00:14:15.350 "copy": true, 00:14:15.350 "nvme_iov_md": false 00:14:15.350 }, 00:14:15.350 "memory_domains": [ 00:14:15.350 { 00:14:15.350 "dma_device_id": "system", 00:14:15.350 "dma_device_type": 1 00:14:15.350 }, 00:14:15.350 { 00:14:15.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.350 "dma_device_type": 2 00:14:15.350 } 00:14:15.350 ], 00:14:15.350 "driver_specific": {} 00:14:15.350 } 00:14:15.350 ] 00:14:15.350 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.350 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:15.350 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:15.350 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:15.350 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:15.350 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:14:15.350 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.350 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:15.350 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.350 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.351 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.351 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.351 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.351 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.351 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.351 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.351 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.351 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.351 "name": "Existed_Raid", 00:14:15.351 "uuid": "af6b5f3b-a816-45b0-9ec1-9490be9b1834", 00:14:15.351 "strip_size_kb": 64, 00:14:15.351 "state": "configuring", 00:14:15.351 "raid_level": "raid0", 00:14:15.351 "superblock": true, 00:14:15.351 "num_base_bdevs": 3, 00:14:15.351 "num_base_bdevs_discovered": 1, 00:14:15.351 "num_base_bdevs_operational": 3, 00:14:15.351 "base_bdevs_list": [ 00:14:15.351 { 00:14:15.351 "name": "BaseBdev1", 00:14:15.351 "uuid": "ba39f70c-28cd-4b74-968f-899421990b51", 00:14:15.351 "is_configured": true, 00:14:15.351 "data_offset": 2048, 00:14:15.351 "data_size": 63488 
00:14:15.351 }, 00:14:15.351 { 00:14:15.351 "name": "BaseBdev2", 00:14:15.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.351 "is_configured": false, 00:14:15.351 "data_offset": 0, 00:14:15.351 "data_size": 0 00:14:15.351 }, 00:14:15.351 { 00:14:15.351 "name": "BaseBdev3", 00:14:15.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.351 "is_configured": false, 00:14:15.351 "data_offset": 0, 00:14:15.351 "data_size": 0 00:14:15.351 } 00:14:15.351 ] 00:14:15.351 }' 00:14:15.351 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.351 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.919 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:15.919 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.919 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.919 [2024-12-06 13:09:02.702713] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:15.919 [2024-12-06 13:09:02.702784] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:15.919 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.919 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:15.919 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.919 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.919 [2024-12-06 13:09:02.710771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:15.919 [2024-12-06 
13:09:02.713510] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:15.919 [2024-12-06 13:09:02.713563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:15.919 [2024-12-06 13:09:02.713580] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:15.919 [2024-12-06 13:09:02.713596] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:15.919 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.919 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:15.919 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:15.919 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:15.919 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:15.919 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:15.919 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:15.919 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.919 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:15.919 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.919 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.919 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.919 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:14:15.919 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.919 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.919 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.919 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.919 13:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.919 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.919 "name": "Existed_Raid", 00:14:15.919 "uuid": "57b52fca-c3a6-492b-9af7-ca27689f3aaa", 00:14:15.919 "strip_size_kb": 64, 00:14:15.919 "state": "configuring", 00:14:15.919 "raid_level": "raid0", 00:14:15.919 "superblock": true, 00:14:15.919 "num_base_bdevs": 3, 00:14:15.919 "num_base_bdevs_discovered": 1, 00:14:15.919 "num_base_bdevs_operational": 3, 00:14:15.919 "base_bdevs_list": [ 00:14:15.919 { 00:14:15.919 "name": "BaseBdev1", 00:14:15.919 "uuid": "ba39f70c-28cd-4b74-968f-899421990b51", 00:14:15.919 "is_configured": true, 00:14:15.919 "data_offset": 2048, 00:14:15.919 "data_size": 63488 00:14:15.919 }, 00:14:15.919 { 00:14:15.919 "name": "BaseBdev2", 00:14:15.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.919 "is_configured": false, 00:14:15.919 "data_offset": 0, 00:14:15.919 "data_size": 0 00:14:15.919 }, 00:14:15.919 { 00:14:15.919 "name": "BaseBdev3", 00:14:15.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.919 "is_configured": false, 00:14:15.919 "data_offset": 0, 00:14:15.919 "data_size": 0 00:14:15.919 } 00:14:15.919 ] 00:14:15.919 }' 00:14:15.919 13:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.919 13:09:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:16.317 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:16.317 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.317 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.317 [2024-12-06 13:09:03.253342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:16.317 BaseBdev2 00:14:16.317 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.317 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:16.317 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:16.317 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:16.317 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:16.317 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:16.317 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:16.317 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:16.317 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.317 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.317 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.317 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:16.317 13:09:03 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.317 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.317 [ 00:14:16.317 { 00:14:16.317 "name": "BaseBdev2", 00:14:16.317 "aliases": [ 00:14:16.317 "f888c8ba-68d9-471c-88b6-acdfb209f9d0" 00:14:16.317 ], 00:14:16.317 "product_name": "Malloc disk", 00:14:16.317 "block_size": 512, 00:14:16.317 "num_blocks": 65536, 00:14:16.317 "uuid": "f888c8ba-68d9-471c-88b6-acdfb209f9d0", 00:14:16.317 "assigned_rate_limits": { 00:14:16.317 "rw_ios_per_sec": 0, 00:14:16.317 "rw_mbytes_per_sec": 0, 00:14:16.317 "r_mbytes_per_sec": 0, 00:14:16.317 "w_mbytes_per_sec": 0 00:14:16.317 }, 00:14:16.317 "claimed": true, 00:14:16.317 "claim_type": "exclusive_write", 00:14:16.317 "zoned": false, 00:14:16.317 "supported_io_types": { 00:14:16.317 "read": true, 00:14:16.317 "write": true, 00:14:16.317 "unmap": true, 00:14:16.317 "flush": true, 00:14:16.317 "reset": true, 00:14:16.317 "nvme_admin": false, 00:14:16.317 "nvme_io": false, 00:14:16.317 "nvme_io_md": false, 00:14:16.317 "write_zeroes": true, 00:14:16.317 "zcopy": true, 00:14:16.317 "get_zone_info": false, 00:14:16.317 "zone_management": false, 00:14:16.317 "zone_append": false, 00:14:16.317 "compare": false, 00:14:16.317 "compare_and_write": false, 00:14:16.317 "abort": true, 00:14:16.317 "seek_hole": false, 00:14:16.317 "seek_data": false, 00:14:16.317 "copy": true, 00:14:16.317 "nvme_iov_md": false 00:14:16.317 }, 00:14:16.317 "memory_domains": [ 00:14:16.317 { 00:14:16.317 "dma_device_id": "system", 00:14:16.317 "dma_device_type": 1 00:14:16.317 }, 00:14:16.317 { 00:14:16.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.317 "dma_device_type": 2 00:14:16.317 } 00:14:16.317 ], 00:14:16.317 "driver_specific": {} 00:14:16.317 } 00:14:16.317 ] 00:14:16.317 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.317 13:09:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:14:16.317 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:16.317 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:16.317 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:16.317 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:16.317 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:16.317 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:16.317 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.317 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:16.317 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.317 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.317 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.317 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.317 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.317 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.317 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.318 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.318 13:09:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.578 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.578 "name": "Existed_Raid", 00:14:16.578 "uuid": "57b52fca-c3a6-492b-9af7-ca27689f3aaa", 00:14:16.578 "strip_size_kb": 64, 00:14:16.578 "state": "configuring", 00:14:16.578 "raid_level": "raid0", 00:14:16.578 "superblock": true, 00:14:16.578 "num_base_bdevs": 3, 00:14:16.578 "num_base_bdevs_discovered": 2, 00:14:16.578 "num_base_bdevs_operational": 3, 00:14:16.578 "base_bdevs_list": [ 00:14:16.578 { 00:14:16.578 "name": "BaseBdev1", 00:14:16.578 "uuid": "ba39f70c-28cd-4b74-968f-899421990b51", 00:14:16.578 "is_configured": true, 00:14:16.578 "data_offset": 2048, 00:14:16.578 "data_size": 63488 00:14:16.578 }, 00:14:16.578 { 00:14:16.578 "name": "BaseBdev2", 00:14:16.578 "uuid": "f888c8ba-68d9-471c-88b6-acdfb209f9d0", 00:14:16.578 "is_configured": true, 00:14:16.578 "data_offset": 2048, 00:14:16.578 "data_size": 63488 00:14:16.578 }, 00:14:16.578 { 00:14:16.578 "name": "BaseBdev3", 00:14:16.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.578 "is_configured": false, 00:14:16.578 "data_offset": 0, 00:14:16.578 "data_size": 0 00:14:16.578 } 00:14:16.578 ] 00:14:16.578 }' 00:14:16.578 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.578 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.837 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:16.837 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.837 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.096 [2024-12-06 13:09:03.866835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:17.096 [2024-12-06 13:09:03.867149] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:17.096 [2024-12-06 13:09:03.867178] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:17.096 BaseBdev3 00:14:17.096 [2024-12-06 13:09:03.867528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:17.096 [2024-12-06 13:09:03.867730] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:17.096 [2024-12-06 13:09:03.867748] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:17.096 [2024-12-06 13:09:03.867925] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:17.096 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.096 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:17.096 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:17.096 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:17.096 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:17.096 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:17.096 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:17.096 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:17.096 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.096 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.096 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:17.096 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:17.096 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.096 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.096 [ 00:14:17.096 { 00:14:17.096 "name": "BaseBdev3", 00:14:17.096 "aliases": [ 00:14:17.096 "38fe93d2-2876-44f6-bbd5-09c7c4db6979" 00:14:17.096 ], 00:14:17.096 "product_name": "Malloc disk", 00:14:17.096 "block_size": 512, 00:14:17.096 "num_blocks": 65536, 00:14:17.096 "uuid": "38fe93d2-2876-44f6-bbd5-09c7c4db6979", 00:14:17.096 "assigned_rate_limits": { 00:14:17.096 "rw_ios_per_sec": 0, 00:14:17.096 "rw_mbytes_per_sec": 0, 00:14:17.096 "r_mbytes_per_sec": 0, 00:14:17.096 "w_mbytes_per_sec": 0 00:14:17.096 }, 00:14:17.096 "claimed": true, 00:14:17.096 "claim_type": "exclusive_write", 00:14:17.096 "zoned": false, 00:14:17.096 "supported_io_types": { 00:14:17.096 "read": true, 00:14:17.096 "write": true, 00:14:17.096 "unmap": true, 00:14:17.096 "flush": true, 00:14:17.096 "reset": true, 00:14:17.096 "nvme_admin": false, 00:14:17.096 "nvme_io": false, 00:14:17.096 "nvme_io_md": false, 00:14:17.096 "write_zeroes": true, 00:14:17.096 "zcopy": true, 00:14:17.096 "get_zone_info": false, 00:14:17.096 "zone_management": false, 00:14:17.096 "zone_append": false, 00:14:17.096 "compare": false, 00:14:17.096 "compare_and_write": false, 00:14:17.096 "abort": true, 00:14:17.096 "seek_hole": false, 00:14:17.096 "seek_data": false, 00:14:17.096 "copy": true, 00:14:17.096 "nvme_iov_md": false 00:14:17.096 }, 00:14:17.096 "memory_domains": [ 00:14:17.096 { 00:14:17.096 "dma_device_id": "system", 00:14:17.096 "dma_device_type": 1 00:14:17.096 }, 00:14:17.096 { 00:14:17.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.096 "dma_device_type": 2 00:14:17.096 } 00:14:17.096 ], 00:14:17.096 "driver_specific": 
{} 00:14:17.096 } 00:14:17.096 ] 00:14:17.096 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.096 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:17.096 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:17.096 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:17.096 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:14:17.096 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:17.096 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:17.096 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:17.096 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:17.096 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:17.096 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.096 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.096 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.096 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.097 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.097 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.097 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:17.097 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.097 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.097 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.097 "name": "Existed_Raid", 00:14:17.097 "uuid": "57b52fca-c3a6-492b-9af7-ca27689f3aaa", 00:14:17.097 "strip_size_kb": 64, 00:14:17.097 "state": "online", 00:14:17.097 "raid_level": "raid0", 00:14:17.097 "superblock": true, 00:14:17.097 "num_base_bdevs": 3, 00:14:17.097 "num_base_bdevs_discovered": 3, 00:14:17.097 "num_base_bdevs_operational": 3, 00:14:17.097 "base_bdevs_list": [ 00:14:17.097 { 00:14:17.097 "name": "BaseBdev1", 00:14:17.097 "uuid": "ba39f70c-28cd-4b74-968f-899421990b51", 00:14:17.097 "is_configured": true, 00:14:17.097 "data_offset": 2048, 00:14:17.097 "data_size": 63488 00:14:17.097 }, 00:14:17.097 { 00:14:17.097 "name": "BaseBdev2", 00:14:17.097 "uuid": "f888c8ba-68d9-471c-88b6-acdfb209f9d0", 00:14:17.097 "is_configured": true, 00:14:17.097 "data_offset": 2048, 00:14:17.097 "data_size": 63488 00:14:17.097 }, 00:14:17.097 { 00:14:17.097 "name": "BaseBdev3", 00:14:17.097 "uuid": "38fe93d2-2876-44f6-bbd5-09c7c4db6979", 00:14:17.097 "is_configured": true, 00:14:17.097 "data_offset": 2048, 00:14:17.097 "data_size": 63488 00:14:17.097 } 00:14:17.097 ] 00:14:17.097 }' 00:14:17.097 13:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.097 13:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.665 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:17.665 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:17.665 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:14:17.665 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:17.665 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:17.665 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:17.665 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:17.665 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.665 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.665 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:17.665 [2024-12-06 13:09:04.419397] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:17.665 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.665 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:17.665 "name": "Existed_Raid", 00:14:17.665 "aliases": [ 00:14:17.665 "57b52fca-c3a6-492b-9af7-ca27689f3aaa" 00:14:17.665 ], 00:14:17.665 "product_name": "Raid Volume", 00:14:17.665 "block_size": 512, 00:14:17.665 "num_blocks": 190464, 00:14:17.665 "uuid": "57b52fca-c3a6-492b-9af7-ca27689f3aaa", 00:14:17.665 "assigned_rate_limits": { 00:14:17.665 "rw_ios_per_sec": 0, 00:14:17.665 "rw_mbytes_per_sec": 0, 00:14:17.665 "r_mbytes_per_sec": 0, 00:14:17.665 "w_mbytes_per_sec": 0 00:14:17.665 }, 00:14:17.665 "claimed": false, 00:14:17.665 "zoned": false, 00:14:17.665 "supported_io_types": { 00:14:17.665 "read": true, 00:14:17.665 "write": true, 00:14:17.665 "unmap": true, 00:14:17.665 "flush": true, 00:14:17.665 "reset": true, 00:14:17.665 "nvme_admin": false, 00:14:17.665 "nvme_io": false, 00:14:17.665 "nvme_io_md": false, 00:14:17.665 
"write_zeroes": true, 00:14:17.665 "zcopy": false, 00:14:17.665 "get_zone_info": false, 00:14:17.665 "zone_management": false, 00:14:17.665 "zone_append": false, 00:14:17.665 "compare": false, 00:14:17.665 "compare_and_write": false, 00:14:17.665 "abort": false, 00:14:17.665 "seek_hole": false, 00:14:17.665 "seek_data": false, 00:14:17.665 "copy": false, 00:14:17.665 "nvme_iov_md": false 00:14:17.665 }, 00:14:17.665 "memory_domains": [ 00:14:17.665 { 00:14:17.665 "dma_device_id": "system", 00:14:17.665 "dma_device_type": 1 00:14:17.665 }, 00:14:17.665 { 00:14:17.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.665 "dma_device_type": 2 00:14:17.665 }, 00:14:17.665 { 00:14:17.665 "dma_device_id": "system", 00:14:17.665 "dma_device_type": 1 00:14:17.665 }, 00:14:17.665 { 00:14:17.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.665 "dma_device_type": 2 00:14:17.665 }, 00:14:17.665 { 00:14:17.665 "dma_device_id": "system", 00:14:17.665 "dma_device_type": 1 00:14:17.665 }, 00:14:17.665 { 00:14:17.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.665 "dma_device_type": 2 00:14:17.665 } 00:14:17.665 ], 00:14:17.665 "driver_specific": { 00:14:17.665 "raid": { 00:14:17.665 "uuid": "57b52fca-c3a6-492b-9af7-ca27689f3aaa", 00:14:17.665 "strip_size_kb": 64, 00:14:17.665 "state": "online", 00:14:17.665 "raid_level": "raid0", 00:14:17.665 "superblock": true, 00:14:17.665 "num_base_bdevs": 3, 00:14:17.665 "num_base_bdevs_discovered": 3, 00:14:17.665 "num_base_bdevs_operational": 3, 00:14:17.665 "base_bdevs_list": [ 00:14:17.665 { 00:14:17.665 "name": "BaseBdev1", 00:14:17.665 "uuid": "ba39f70c-28cd-4b74-968f-899421990b51", 00:14:17.665 "is_configured": true, 00:14:17.665 "data_offset": 2048, 00:14:17.665 "data_size": 63488 00:14:17.665 }, 00:14:17.665 { 00:14:17.665 "name": "BaseBdev2", 00:14:17.665 "uuid": "f888c8ba-68d9-471c-88b6-acdfb209f9d0", 00:14:17.665 "is_configured": true, 00:14:17.665 "data_offset": 2048, 00:14:17.665 "data_size": 63488 00:14:17.665 }, 
00:14:17.665 { 00:14:17.665 "name": "BaseBdev3", 00:14:17.665 "uuid": "38fe93d2-2876-44f6-bbd5-09c7c4db6979", 00:14:17.665 "is_configured": true, 00:14:17.665 "data_offset": 2048, 00:14:17.665 "data_size": 63488 00:14:17.665 } 00:14:17.665 ] 00:14:17.665 } 00:14:17.665 } 00:14:17.665 }' 00:14:17.665 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:17.665 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:17.665 BaseBdev2 00:14:17.665 BaseBdev3' 00:14:17.665 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:17.665 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:17.665 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:17.665 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:17.665 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.665 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.665 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:17.665 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.665 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:17.665 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:17.665 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:17.665 
13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:17.665 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.665 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:17.665 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.665 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.925 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:17.925 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:17.925 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:17.925 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:17.925 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.925 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:17.925 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.925 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.925 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:17.925 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:17.925 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:17.925 13:09:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.925 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.925 [2024-12-06 13:09:04.735304] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:17.925 [2024-12-06 13:09:04.735339] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:17.925 [2024-12-06 13:09:04.735442] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:17.925 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.925 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:17.925 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:14:17.925 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:17.925 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:14:17.925 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:17.925 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:14:17.925 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:17.925 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:17.925 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:17.925 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:17.925 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:17.925 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:14:17.925 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.925 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.925 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.925 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.925 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.925 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.925 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.925 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.925 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.925 "name": "Existed_Raid", 00:14:17.925 "uuid": "57b52fca-c3a6-492b-9af7-ca27689f3aaa", 00:14:17.925 "strip_size_kb": 64, 00:14:17.925 "state": "offline", 00:14:17.925 "raid_level": "raid0", 00:14:17.925 "superblock": true, 00:14:17.925 "num_base_bdevs": 3, 00:14:17.925 "num_base_bdevs_discovered": 2, 00:14:17.925 "num_base_bdevs_operational": 2, 00:14:17.925 "base_bdevs_list": [ 00:14:17.925 { 00:14:17.925 "name": null, 00:14:17.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.925 "is_configured": false, 00:14:17.925 "data_offset": 0, 00:14:17.925 "data_size": 63488 00:14:17.925 }, 00:14:17.925 { 00:14:17.925 "name": "BaseBdev2", 00:14:17.925 "uuid": "f888c8ba-68d9-471c-88b6-acdfb209f9d0", 00:14:17.925 "is_configured": true, 00:14:17.925 "data_offset": 2048, 00:14:17.925 "data_size": 63488 00:14:17.925 }, 00:14:17.925 { 00:14:17.925 "name": "BaseBdev3", 00:14:17.925 "uuid": "38fe93d2-2876-44f6-bbd5-09c7c4db6979", 
00:14:17.925 "is_configured": true, 00:14:17.925 "data_offset": 2048, 00:14:17.925 "data_size": 63488 00:14:17.925 } 00:14:17.925 ] 00:14:17.925 }' 00:14:17.925 13:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.925 13:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.503 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:18.503 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:18.503 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.503 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:18.503 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.503 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.503 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.503 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:18.503 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:18.503 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:18.503 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.503 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.503 [2024-12-06 13:09:05.374323] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:18.503 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.503 13:09:05 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:18.503 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:18.503 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.503 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:18.503 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.503 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.503 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.760 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:18.760 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:18.760 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:18.760 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.760 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.760 [2024-12-06 13:09:05.521090] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:18.760 [2024-12-06 13:09:05.521167] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:18.760 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.760 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:18.760 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:18.760 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:18.760 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:18.761 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.761 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.761 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.761 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:18.761 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:18.761 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:18.761 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:18.761 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:18.761 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:18.761 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.761 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.761 BaseBdev2 00:14:18.761 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.761 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:18.761 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:18.761 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:18.761 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:18.761 13:09:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:18.761 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:18.761 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:18.761 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.761 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.761 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.761 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:18.761 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.761 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.761 [ 00:14:18.761 { 00:14:18.761 "name": "BaseBdev2", 00:14:18.761 "aliases": [ 00:14:18.761 "8156db4d-6e51-4c95-be3e-aa677a0f2e37" 00:14:18.761 ], 00:14:18.761 "product_name": "Malloc disk", 00:14:18.761 "block_size": 512, 00:14:18.761 "num_blocks": 65536, 00:14:18.761 "uuid": "8156db4d-6e51-4c95-be3e-aa677a0f2e37", 00:14:18.761 "assigned_rate_limits": { 00:14:18.761 "rw_ios_per_sec": 0, 00:14:18.761 "rw_mbytes_per_sec": 0, 00:14:18.761 "r_mbytes_per_sec": 0, 00:14:18.761 "w_mbytes_per_sec": 0 00:14:18.761 }, 00:14:18.761 "claimed": false, 00:14:18.761 "zoned": false, 00:14:18.761 "supported_io_types": { 00:14:18.761 "read": true, 00:14:18.761 "write": true, 00:14:18.761 "unmap": true, 00:14:18.761 "flush": true, 00:14:18.761 "reset": true, 00:14:18.761 "nvme_admin": false, 00:14:18.761 "nvme_io": false, 00:14:18.761 "nvme_io_md": false, 00:14:18.761 "write_zeroes": true, 00:14:18.761 "zcopy": true, 00:14:18.761 "get_zone_info": false, 00:14:18.761 
"zone_management": false, 00:14:18.761 "zone_append": false, 00:14:18.761 "compare": false, 00:14:18.761 "compare_and_write": false, 00:14:18.761 "abort": true, 00:14:18.761 "seek_hole": false, 00:14:18.761 "seek_data": false, 00:14:18.761 "copy": true, 00:14:18.761 "nvme_iov_md": false 00:14:18.761 }, 00:14:18.761 "memory_domains": [ 00:14:18.761 { 00:14:18.761 "dma_device_id": "system", 00:14:18.761 "dma_device_type": 1 00:14:18.761 }, 00:14:18.761 { 00:14:18.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.761 "dma_device_type": 2 00:14:18.761 } 00:14:18.761 ], 00:14:18.761 "driver_specific": {} 00:14:18.761 } 00:14:18.761 ] 00:14:18.761 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.761 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:18.761 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:18.761 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:18.761 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:18.761 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.761 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.021 BaseBdev3 00:14:19.021 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.021 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:19.021 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:19.021 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:19.021 13:09:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:14:19.021 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:19.021 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:19.021 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:19.021 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.021 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.021 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.021 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:19.021 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.021 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.021 [ 00:14:19.021 { 00:14:19.021 "name": "BaseBdev3", 00:14:19.021 "aliases": [ 00:14:19.021 "d6c141af-0503-454b-8689-0617507392b8" 00:14:19.021 ], 00:14:19.021 "product_name": "Malloc disk", 00:14:19.021 "block_size": 512, 00:14:19.021 "num_blocks": 65536, 00:14:19.021 "uuid": "d6c141af-0503-454b-8689-0617507392b8", 00:14:19.021 "assigned_rate_limits": { 00:14:19.021 "rw_ios_per_sec": 0, 00:14:19.021 "rw_mbytes_per_sec": 0, 00:14:19.021 "r_mbytes_per_sec": 0, 00:14:19.021 "w_mbytes_per_sec": 0 00:14:19.021 }, 00:14:19.021 "claimed": false, 00:14:19.021 "zoned": false, 00:14:19.021 "supported_io_types": { 00:14:19.021 "read": true, 00:14:19.021 "write": true, 00:14:19.021 "unmap": true, 00:14:19.021 "flush": true, 00:14:19.021 "reset": true, 00:14:19.021 "nvme_admin": false, 00:14:19.021 "nvme_io": false, 00:14:19.021 "nvme_io_md": false, 00:14:19.021 "write_zeroes": true, 00:14:19.021 
"zcopy": true, 00:14:19.021 "get_zone_info": false, 00:14:19.021 "zone_management": false, 00:14:19.021 "zone_append": false, 00:14:19.021 "compare": false, 00:14:19.021 "compare_and_write": false, 00:14:19.021 "abort": true, 00:14:19.021 "seek_hole": false, 00:14:19.021 "seek_data": false, 00:14:19.021 "copy": true, 00:14:19.021 "nvme_iov_md": false 00:14:19.021 }, 00:14:19.021 "memory_domains": [ 00:14:19.021 { 00:14:19.021 "dma_device_id": "system", 00:14:19.021 "dma_device_type": 1 00:14:19.021 }, 00:14:19.021 { 00:14:19.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.021 "dma_device_type": 2 00:14:19.021 } 00:14:19.021 ], 00:14:19.021 "driver_specific": {} 00:14:19.021 } 00:14:19.021 ] 00:14:19.021 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.021 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:19.021 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:19.021 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:19.021 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:19.021 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.021 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.021 [2024-12-06 13:09:05.824237] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:19.021 [2024-12-06 13:09:05.824313] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:19.021 [2024-12-06 13:09:05.824351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:19.021 [2024-12-06 13:09:05.826786] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:19.021 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.021 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:19.021 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.021 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.021 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:19.021 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.021 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:19.021 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.021 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.021 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.021 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.021 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.021 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.021 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.021 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.021 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.021 13:09:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.021 "name": "Existed_Raid", 00:14:19.021 "uuid": "b71e3d41-e670-4508-becd-0f0d7d0e8d17", 00:14:19.021 "strip_size_kb": 64, 00:14:19.021 "state": "configuring", 00:14:19.021 "raid_level": "raid0", 00:14:19.021 "superblock": true, 00:14:19.021 "num_base_bdevs": 3, 00:14:19.021 "num_base_bdevs_discovered": 2, 00:14:19.021 "num_base_bdevs_operational": 3, 00:14:19.021 "base_bdevs_list": [ 00:14:19.021 { 00:14:19.021 "name": "BaseBdev1", 00:14:19.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.021 "is_configured": false, 00:14:19.022 "data_offset": 0, 00:14:19.022 "data_size": 0 00:14:19.022 }, 00:14:19.022 { 00:14:19.022 "name": "BaseBdev2", 00:14:19.022 "uuid": "8156db4d-6e51-4c95-be3e-aa677a0f2e37", 00:14:19.022 "is_configured": true, 00:14:19.022 "data_offset": 2048, 00:14:19.022 "data_size": 63488 00:14:19.022 }, 00:14:19.022 { 00:14:19.022 "name": "BaseBdev3", 00:14:19.022 "uuid": "d6c141af-0503-454b-8689-0617507392b8", 00:14:19.022 "is_configured": true, 00:14:19.022 "data_offset": 2048, 00:14:19.022 "data_size": 63488 00:14:19.022 } 00:14:19.022 ] 00:14:19.022 }' 00:14:19.022 13:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.022 13:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.587 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:19.587 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.587 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.587 [2024-12-06 13:09:06.324418] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:19.587 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.587 13:09:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:19.587 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.587 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.587 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:19.587 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.587 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:19.587 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.587 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.587 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.587 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.588 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.588 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.588 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.588 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.588 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.588 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.588 "name": "Existed_Raid", 00:14:19.588 "uuid": "b71e3d41-e670-4508-becd-0f0d7d0e8d17", 00:14:19.588 "strip_size_kb": 64, 
00:14:19.588 "state": "configuring", 00:14:19.588 "raid_level": "raid0", 00:14:19.588 "superblock": true, 00:14:19.588 "num_base_bdevs": 3, 00:14:19.588 "num_base_bdevs_discovered": 1, 00:14:19.588 "num_base_bdevs_operational": 3, 00:14:19.588 "base_bdevs_list": [ 00:14:19.588 { 00:14:19.588 "name": "BaseBdev1", 00:14:19.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.588 "is_configured": false, 00:14:19.588 "data_offset": 0, 00:14:19.588 "data_size": 0 00:14:19.588 }, 00:14:19.588 { 00:14:19.588 "name": null, 00:14:19.588 "uuid": "8156db4d-6e51-4c95-be3e-aa677a0f2e37", 00:14:19.588 "is_configured": false, 00:14:19.588 "data_offset": 0, 00:14:19.588 "data_size": 63488 00:14:19.588 }, 00:14:19.588 { 00:14:19.588 "name": "BaseBdev3", 00:14:19.588 "uuid": "d6c141af-0503-454b-8689-0617507392b8", 00:14:19.588 "is_configured": true, 00:14:19.588 "data_offset": 2048, 00:14:19.588 "data_size": 63488 00:14:19.588 } 00:14:19.588 ] 00:14:19.588 }' 00:14:19.588 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.588 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.846 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.846 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:19.846 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.846 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.846 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.104 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:20.104 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:14:20.104 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.104 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.104 [2024-12-06 13:09:06.919057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:20.104 BaseBdev1 00:14:20.104 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.104 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:20.105 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:20.105 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:20.105 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:20.105 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:20.105 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:20.105 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:20.105 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.105 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.105 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.105 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:20.105 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.105 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.105 
[ 00:14:20.105 { 00:14:20.105 "name": "BaseBdev1", 00:14:20.105 "aliases": [ 00:14:20.105 "f6f33951-9fe2-4626-910d-d39c21395e97" 00:14:20.105 ], 00:14:20.105 "product_name": "Malloc disk", 00:14:20.105 "block_size": 512, 00:14:20.105 "num_blocks": 65536, 00:14:20.105 "uuid": "f6f33951-9fe2-4626-910d-d39c21395e97", 00:14:20.105 "assigned_rate_limits": { 00:14:20.105 "rw_ios_per_sec": 0, 00:14:20.105 "rw_mbytes_per_sec": 0, 00:14:20.105 "r_mbytes_per_sec": 0, 00:14:20.105 "w_mbytes_per_sec": 0 00:14:20.105 }, 00:14:20.105 "claimed": true, 00:14:20.105 "claim_type": "exclusive_write", 00:14:20.105 "zoned": false, 00:14:20.105 "supported_io_types": { 00:14:20.105 "read": true, 00:14:20.105 "write": true, 00:14:20.105 "unmap": true, 00:14:20.105 "flush": true, 00:14:20.105 "reset": true, 00:14:20.105 "nvme_admin": false, 00:14:20.105 "nvme_io": false, 00:14:20.105 "nvme_io_md": false, 00:14:20.105 "write_zeroes": true, 00:14:20.105 "zcopy": true, 00:14:20.105 "get_zone_info": false, 00:14:20.105 "zone_management": false, 00:14:20.105 "zone_append": false, 00:14:20.105 "compare": false, 00:14:20.105 "compare_and_write": false, 00:14:20.105 "abort": true, 00:14:20.105 "seek_hole": false, 00:14:20.105 "seek_data": false, 00:14:20.105 "copy": true, 00:14:20.105 "nvme_iov_md": false 00:14:20.105 }, 00:14:20.105 "memory_domains": [ 00:14:20.105 { 00:14:20.105 "dma_device_id": "system", 00:14:20.105 "dma_device_type": 1 00:14:20.105 }, 00:14:20.105 { 00:14:20.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.105 "dma_device_type": 2 00:14:20.105 } 00:14:20.105 ], 00:14:20.105 "driver_specific": {} 00:14:20.105 } 00:14:20.105 ] 00:14:20.105 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.105 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:20.105 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:14:20.105 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.105 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:20.105 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:20.105 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.105 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.105 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.105 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.105 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.105 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.105 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.105 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.105 13:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.105 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.105 13:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.105 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.105 "name": "Existed_Raid", 00:14:20.105 "uuid": "b71e3d41-e670-4508-becd-0f0d7d0e8d17", 00:14:20.105 "strip_size_kb": 64, 00:14:20.105 "state": "configuring", 00:14:20.105 "raid_level": "raid0", 00:14:20.105 "superblock": true, 
00:14:20.105 "num_base_bdevs": 3, 00:14:20.105 "num_base_bdevs_discovered": 2, 00:14:20.105 "num_base_bdevs_operational": 3, 00:14:20.105 "base_bdevs_list": [ 00:14:20.105 { 00:14:20.105 "name": "BaseBdev1", 00:14:20.105 "uuid": "f6f33951-9fe2-4626-910d-d39c21395e97", 00:14:20.105 "is_configured": true, 00:14:20.105 "data_offset": 2048, 00:14:20.105 "data_size": 63488 00:14:20.105 }, 00:14:20.105 { 00:14:20.105 "name": null, 00:14:20.105 "uuid": "8156db4d-6e51-4c95-be3e-aa677a0f2e37", 00:14:20.105 "is_configured": false, 00:14:20.105 "data_offset": 0, 00:14:20.105 "data_size": 63488 00:14:20.105 }, 00:14:20.105 { 00:14:20.105 "name": "BaseBdev3", 00:14:20.105 "uuid": "d6c141af-0503-454b-8689-0617507392b8", 00:14:20.105 "is_configured": true, 00:14:20.105 "data_offset": 2048, 00:14:20.105 "data_size": 63488 00:14:20.105 } 00:14:20.105 ] 00:14:20.105 }' 00:14:20.105 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.105 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.672 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.672 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:20.672 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.672 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.672 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.672 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:20.672 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:20.672 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:14:20.672 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.672 [2024-12-06 13:09:07.511297] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:20.672 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.672 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:20.672 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.672 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:20.672 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:20.672 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.672 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.672 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.672 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.672 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.672 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.672 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.672 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.672 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.672 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:14:20.672 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.672 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.672 "name": "Existed_Raid", 00:14:20.672 "uuid": "b71e3d41-e670-4508-becd-0f0d7d0e8d17", 00:14:20.672 "strip_size_kb": 64, 00:14:20.672 "state": "configuring", 00:14:20.672 "raid_level": "raid0", 00:14:20.672 "superblock": true, 00:14:20.672 "num_base_bdevs": 3, 00:14:20.672 "num_base_bdevs_discovered": 1, 00:14:20.672 "num_base_bdevs_operational": 3, 00:14:20.672 "base_bdevs_list": [ 00:14:20.672 { 00:14:20.672 "name": "BaseBdev1", 00:14:20.672 "uuid": "f6f33951-9fe2-4626-910d-d39c21395e97", 00:14:20.672 "is_configured": true, 00:14:20.672 "data_offset": 2048, 00:14:20.672 "data_size": 63488 00:14:20.672 }, 00:14:20.672 { 00:14:20.672 "name": null, 00:14:20.672 "uuid": "8156db4d-6e51-4c95-be3e-aa677a0f2e37", 00:14:20.672 "is_configured": false, 00:14:20.672 "data_offset": 0, 00:14:20.672 "data_size": 63488 00:14:20.672 }, 00:14:20.672 { 00:14:20.672 "name": null, 00:14:20.672 "uuid": "d6c141af-0503-454b-8689-0617507392b8", 00:14:20.672 "is_configured": false, 00:14:20.672 "data_offset": 0, 00:14:20.672 "data_size": 63488 00:14:20.672 } 00:14:20.672 ] 00:14:20.672 }' 00:14:20.672 13:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.672 13:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.238 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.238 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:21.238 13:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.238 13:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:14:21.238 13:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.238 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:21.238 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:21.238 13:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.238 13:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.238 [2024-12-06 13:09:08.099443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:21.238 13:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.238 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:21.238 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.238 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:21.238 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:21.238 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.238 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:21.238 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.238 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.238 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.238 13:09:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.238 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.238 13:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.238 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.238 13:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.238 13:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.238 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.238 "name": "Existed_Raid", 00:14:21.238 "uuid": "b71e3d41-e670-4508-becd-0f0d7d0e8d17", 00:14:21.238 "strip_size_kb": 64, 00:14:21.238 "state": "configuring", 00:14:21.238 "raid_level": "raid0", 00:14:21.238 "superblock": true, 00:14:21.238 "num_base_bdevs": 3, 00:14:21.238 "num_base_bdevs_discovered": 2, 00:14:21.238 "num_base_bdevs_operational": 3, 00:14:21.238 "base_bdevs_list": [ 00:14:21.238 { 00:14:21.238 "name": "BaseBdev1", 00:14:21.238 "uuid": "f6f33951-9fe2-4626-910d-d39c21395e97", 00:14:21.238 "is_configured": true, 00:14:21.238 "data_offset": 2048, 00:14:21.238 "data_size": 63488 00:14:21.238 }, 00:14:21.238 { 00:14:21.238 "name": null, 00:14:21.238 "uuid": "8156db4d-6e51-4c95-be3e-aa677a0f2e37", 00:14:21.238 "is_configured": false, 00:14:21.238 "data_offset": 0, 00:14:21.238 "data_size": 63488 00:14:21.238 }, 00:14:21.238 { 00:14:21.238 "name": "BaseBdev3", 00:14:21.238 "uuid": "d6c141af-0503-454b-8689-0617507392b8", 00:14:21.238 "is_configured": true, 00:14:21.238 "data_offset": 2048, 00:14:21.238 "data_size": 63488 00:14:21.238 } 00:14:21.238 ] 00:14:21.238 }' 00:14:21.238 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.238 13:09:08 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:14:21.870 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:21.870 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.870 13:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.870 13:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.870 13:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.870 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:21.870 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:21.870 13:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.870 13:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.870 [2024-12-06 13:09:08.691676] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:21.870 13:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.870 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:21.870 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.870 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:21.870 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:21.870 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.870 13:09:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:21.870 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.870 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.870 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.870 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.870 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.870 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.870 13:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.870 13:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.870 13:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.870 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.870 "name": "Existed_Raid", 00:14:21.870 "uuid": "b71e3d41-e670-4508-becd-0f0d7d0e8d17", 00:14:21.870 "strip_size_kb": 64, 00:14:21.870 "state": "configuring", 00:14:21.870 "raid_level": "raid0", 00:14:21.870 "superblock": true, 00:14:21.870 "num_base_bdevs": 3, 00:14:21.870 "num_base_bdevs_discovered": 1, 00:14:21.870 "num_base_bdevs_operational": 3, 00:14:21.870 "base_bdevs_list": [ 00:14:21.870 { 00:14:21.870 "name": null, 00:14:21.870 "uuid": "f6f33951-9fe2-4626-910d-d39c21395e97", 00:14:21.870 "is_configured": false, 00:14:21.870 "data_offset": 0, 00:14:21.870 "data_size": 63488 00:14:21.870 }, 00:14:21.870 { 00:14:21.870 "name": null, 00:14:21.870 "uuid": "8156db4d-6e51-4c95-be3e-aa677a0f2e37", 00:14:21.870 "is_configured": false, 00:14:21.870 "data_offset": 0, 00:14:21.870 
"data_size": 63488 00:14:21.870 }, 00:14:21.870 { 00:14:21.870 "name": "BaseBdev3", 00:14:21.870 "uuid": "d6c141af-0503-454b-8689-0617507392b8", 00:14:21.871 "is_configured": true, 00:14:21.871 "data_offset": 2048, 00:14:21.871 "data_size": 63488 00:14:21.871 } 00:14:21.871 ] 00:14:21.871 }' 00:14:21.871 13:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.871 13:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.437 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.437 13:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.437 13:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.437 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:22.437 13:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.437 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:22.437 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:22.437 13:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.437 13:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.437 [2024-12-06 13:09:09.369261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:22.437 13:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.437 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:22.437 13:09:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.437 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.437 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:22.437 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.437 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:22.437 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.437 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.437 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.437 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.437 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.437 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.437 13:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.437 13:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.437 13:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.437 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.437 "name": "Existed_Raid", 00:14:22.437 "uuid": "b71e3d41-e670-4508-becd-0f0d7d0e8d17", 00:14:22.437 "strip_size_kb": 64, 00:14:22.437 "state": "configuring", 00:14:22.437 "raid_level": "raid0", 00:14:22.437 "superblock": true, 00:14:22.437 "num_base_bdevs": 3, 00:14:22.437 
"num_base_bdevs_discovered": 2, 00:14:22.437 "num_base_bdevs_operational": 3, 00:14:22.437 "base_bdevs_list": [ 00:14:22.437 { 00:14:22.437 "name": null, 00:14:22.437 "uuid": "f6f33951-9fe2-4626-910d-d39c21395e97", 00:14:22.437 "is_configured": false, 00:14:22.437 "data_offset": 0, 00:14:22.437 "data_size": 63488 00:14:22.437 }, 00:14:22.437 { 00:14:22.437 "name": "BaseBdev2", 00:14:22.437 "uuid": "8156db4d-6e51-4c95-be3e-aa677a0f2e37", 00:14:22.437 "is_configured": true, 00:14:22.437 "data_offset": 2048, 00:14:22.437 "data_size": 63488 00:14:22.437 }, 00:14:22.437 { 00:14:22.437 "name": "BaseBdev3", 00:14:22.437 "uuid": "d6c141af-0503-454b-8689-0617507392b8", 00:14:22.437 "is_configured": true, 00:14:22.437 "data_offset": 2048, 00:14:22.437 "data_size": 63488 00:14:22.437 } 00:14:22.437 ] 00:14:22.437 }' 00:14:22.437 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.437 13:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.006 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.006 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:23.006 13:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.006 13:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.006 13:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.006 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:23.006 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:23.006 13:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.006 13:09:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.006 13:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.006 13:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.006 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f6f33951-9fe2-4626-910d-d39c21395e97 00:14:23.006 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.006 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.264 [2024-12-06 13:09:10.048953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:23.264 [2024-12-06 13:09:10.049239] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:23.264 [2024-12-06 13:09:10.049263] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:23.264 NewBaseBdev 00:14:23.264 [2024-12-06 13:09:10.049624] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:23.264 [2024-12-06 13:09:10.049832] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:23.264 [2024-12-06 13:09:10.049849] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:23.264 [2024-12-06 13:09:10.050014] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.264 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.264 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:23.264 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:23.264 
13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:23.264 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:23.264 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:23.264 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:23.264 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:23.264 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.264 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.265 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.265 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:23.265 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.265 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.265 [ 00:14:23.265 { 00:14:23.265 "name": "NewBaseBdev", 00:14:23.265 "aliases": [ 00:14:23.265 "f6f33951-9fe2-4626-910d-d39c21395e97" 00:14:23.265 ], 00:14:23.265 "product_name": "Malloc disk", 00:14:23.265 "block_size": 512, 00:14:23.265 "num_blocks": 65536, 00:14:23.265 "uuid": "f6f33951-9fe2-4626-910d-d39c21395e97", 00:14:23.265 "assigned_rate_limits": { 00:14:23.265 "rw_ios_per_sec": 0, 00:14:23.265 "rw_mbytes_per_sec": 0, 00:14:23.265 "r_mbytes_per_sec": 0, 00:14:23.265 "w_mbytes_per_sec": 0 00:14:23.265 }, 00:14:23.265 "claimed": true, 00:14:23.265 "claim_type": "exclusive_write", 00:14:23.265 "zoned": false, 00:14:23.265 "supported_io_types": { 00:14:23.265 "read": true, 00:14:23.265 "write": true, 00:14:23.265 
"unmap": true, 00:14:23.265 "flush": true, 00:14:23.265 "reset": true, 00:14:23.265 "nvme_admin": false, 00:14:23.265 "nvme_io": false, 00:14:23.265 "nvme_io_md": false, 00:14:23.265 "write_zeroes": true, 00:14:23.265 "zcopy": true, 00:14:23.265 "get_zone_info": false, 00:14:23.265 "zone_management": false, 00:14:23.265 "zone_append": false, 00:14:23.265 "compare": false, 00:14:23.265 "compare_and_write": false, 00:14:23.265 "abort": true, 00:14:23.265 "seek_hole": false, 00:14:23.265 "seek_data": false, 00:14:23.265 "copy": true, 00:14:23.265 "nvme_iov_md": false 00:14:23.265 }, 00:14:23.265 "memory_domains": [ 00:14:23.265 { 00:14:23.265 "dma_device_id": "system", 00:14:23.265 "dma_device_type": 1 00:14:23.265 }, 00:14:23.265 { 00:14:23.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.265 "dma_device_type": 2 00:14:23.265 } 00:14:23.265 ], 00:14:23.265 "driver_specific": {} 00:14:23.265 } 00:14:23.265 ] 00:14:23.265 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.265 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:23.265 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:14:23.265 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.265 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:23.265 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:23.265 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.265 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:23.265 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:14:23.265 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.265 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.265 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.265 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.265 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.265 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.265 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.265 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.265 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.265 "name": "Existed_Raid", 00:14:23.265 "uuid": "b71e3d41-e670-4508-becd-0f0d7d0e8d17", 00:14:23.265 "strip_size_kb": 64, 00:14:23.265 "state": "online", 00:14:23.265 "raid_level": "raid0", 00:14:23.265 "superblock": true, 00:14:23.265 "num_base_bdevs": 3, 00:14:23.265 "num_base_bdevs_discovered": 3, 00:14:23.265 "num_base_bdevs_operational": 3, 00:14:23.265 "base_bdevs_list": [ 00:14:23.265 { 00:14:23.265 "name": "NewBaseBdev", 00:14:23.265 "uuid": "f6f33951-9fe2-4626-910d-d39c21395e97", 00:14:23.265 "is_configured": true, 00:14:23.265 "data_offset": 2048, 00:14:23.265 "data_size": 63488 00:14:23.265 }, 00:14:23.265 { 00:14:23.265 "name": "BaseBdev2", 00:14:23.265 "uuid": "8156db4d-6e51-4c95-be3e-aa677a0f2e37", 00:14:23.265 "is_configured": true, 00:14:23.265 "data_offset": 2048, 00:14:23.265 "data_size": 63488 00:14:23.265 }, 00:14:23.265 { 00:14:23.265 "name": "BaseBdev3", 00:14:23.265 "uuid": "d6c141af-0503-454b-8689-0617507392b8", 00:14:23.265 
"is_configured": true, 00:14:23.265 "data_offset": 2048, 00:14:23.265 "data_size": 63488 00:14:23.265 } 00:14:23.265 ] 00:14:23.265 }' 00:14:23.265 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.265 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.829 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:23.829 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:23.829 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:23.829 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:23.829 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:23.829 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:23.829 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:23.829 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:23.829 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.830 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.830 [2024-12-06 13:09:10.613540] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:23.830 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.830 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:23.830 "name": "Existed_Raid", 00:14:23.830 "aliases": [ 00:14:23.830 "b71e3d41-e670-4508-becd-0f0d7d0e8d17" 00:14:23.830 ], 00:14:23.830 "product_name": "Raid 
Volume", 00:14:23.830 "block_size": 512, 00:14:23.830 "num_blocks": 190464, 00:14:23.830 "uuid": "b71e3d41-e670-4508-becd-0f0d7d0e8d17", 00:14:23.830 "assigned_rate_limits": { 00:14:23.830 "rw_ios_per_sec": 0, 00:14:23.830 "rw_mbytes_per_sec": 0, 00:14:23.830 "r_mbytes_per_sec": 0, 00:14:23.830 "w_mbytes_per_sec": 0 00:14:23.830 }, 00:14:23.830 "claimed": false, 00:14:23.830 "zoned": false, 00:14:23.830 "supported_io_types": { 00:14:23.830 "read": true, 00:14:23.830 "write": true, 00:14:23.830 "unmap": true, 00:14:23.830 "flush": true, 00:14:23.830 "reset": true, 00:14:23.830 "nvme_admin": false, 00:14:23.830 "nvme_io": false, 00:14:23.830 "nvme_io_md": false, 00:14:23.830 "write_zeroes": true, 00:14:23.830 "zcopy": false, 00:14:23.830 "get_zone_info": false, 00:14:23.830 "zone_management": false, 00:14:23.830 "zone_append": false, 00:14:23.830 "compare": false, 00:14:23.830 "compare_and_write": false, 00:14:23.830 "abort": false, 00:14:23.830 "seek_hole": false, 00:14:23.830 "seek_data": false, 00:14:23.830 "copy": false, 00:14:23.830 "nvme_iov_md": false 00:14:23.830 }, 00:14:23.830 "memory_domains": [ 00:14:23.830 { 00:14:23.830 "dma_device_id": "system", 00:14:23.830 "dma_device_type": 1 00:14:23.830 }, 00:14:23.830 { 00:14:23.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.830 "dma_device_type": 2 00:14:23.830 }, 00:14:23.830 { 00:14:23.830 "dma_device_id": "system", 00:14:23.830 "dma_device_type": 1 00:14:23.830 }, 00:14:23.830 { 00:14:23.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.830 "dma_device_type": 2 00:14:23.830 }, 00:14:23.830 { 00:14:23.830 "dma_device_id": "system", 00:14:23.830 "dma_device_type": 1 00:14:23.830 }, 00:14:23.830 { 00:14:23.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.830 "dma_device_type": 2 00:14:23.830 } 00:14:23.830 ], 00:14:23.830 "driver_specific": { 00:14:23.830 "raid": { 00:14:23.830 "uuid": "b71e3d41-e670-4508-becd-0f0d7d0e8d17", 00:14:23.830 "strip_size_kb": 64, 00:14:23.830 "state": "online", 
00:14:23.830 "raid_level": "raid0", 00:14:23.830 "superblock": true, 00:14:23.830 "num_base_bdevs": 3, 00:14:23.830 "num_base_bdevs_discovered": 3, 00:14:23.830 "num_base_bdevs_operational": 3, 00:14:23.830 "base_bdevs_list": [ 00:14:23.830 { 00:14:23.830 "name": "NewBaseBdev", 00:14:23.830 "uuid": "f6f33951-9fe2-4626-910d-d39c21395e97", 00:14:23.830 "is_configured": true, 00:14:23.830 "data_offset": 2048, 00:14:23.830 "data_size": 63488 00:14:23.830 }, 00:14:23.830 { 00:14:23.830 "name": "BaseBdev2", 00:14:23.830 "uuid": "8156db4d-6e51-4c95-be3e-aa677a0f2e37", 00:14:23.830 "is_configured": true, 00:14:23.830 "data_offset": 2048, 00:14:23.830 "data_size": 63488 00:14:23.830 }, 00:14:23.830 { 00:14:23.830 "name": "BaseBdev3", 00:14:23.830 "uuid": "d6c141af-0503-454b-8689-0617507392b8", 00:14:23.830 "is_configured": true, 00:14:23.830 "data_offset": 2048, 00:14:23.830 "data_size": 63488 00:14:23.830 } 00:14:23.830 ] 00:14:23.830 } 00:14:23.830 } 00:14:23.830 }' 00:14:23.830 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:23.830 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:23.830 BaseBdev2 00:14:23.830 BaseBdev3' 00:14:23.830 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.830 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:23.830 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:23.830 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:23.830 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.830 13:09:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.830 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.830 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.830 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:23.830 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:23.830 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:23.830 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:23.830 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.830 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.830 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.830 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.087 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:24.087 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:24.087 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:24.087 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:24.087 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:24.087 13:09:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.087 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.087 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.087 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:24.087 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:24.087 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:24.087 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.087 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.087 [2024-12-06 13:09:10.929235] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:24.087 [2024-12-06 13:09:10.929281] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:24.087 [2024-12-06 13:09:10.929372] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:24.087 [2024-12-06 13:09:10.929454] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:24.087 [2024-12-06 13:09:10.929490] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:24.087 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.087 13:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64678 00:14:24.087 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64678 ']' 00:14:24.087 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 
64678 00:14:24.087 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:24.087 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:24.087 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64678 00:14:24.087 killing process with pid 64678 00:14:24.087 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:24.087 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:24.087 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64678' 00:14:24.087 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64678 00:14:24.087 [2024-12-06 13:09:10.967976] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:24.087 13:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64678 00:14:24.345 [2024-12-06 13:09:11.236935] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:25.720 ************************************ 00:14:25.720 END TEST raid_state_function_test_sb 00:14:25.720 ************************************ 00:14:25.720 13:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:25.720 00:14:25.720 real 0m11.809s 00:14:25.720 user 0m19.457s 00:14:25.720 sys 0m1.735s 00:14:25.720 13:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:25.720 13:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.720 13:09:12 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:14:25.720 13:09:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:25.720 
13:09:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:25.720 13:09:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:25.720 ************************************ 00:14:25.720 START TEST raid_superblock_test 00:14:25.720 ************************************ 00:14:25.720 13:09:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:14:25.720 13:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:14:25.720 13:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:14:25.720 13:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:25.720 13:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:25.720 13:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:25.720 13:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:25.720 13:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:25.720 13:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:25.720 13:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:25.721 13:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:25.721 13:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:25.721 13:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:25.721 13:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:25.721 13:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:14:25.721 13:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 
00:14:25.721 13:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:25.721 13:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65309 00:14:25.721 13:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65309 00:14:25.721 13:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:25.721 13:09:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65309 ']' 00:14:25.721 13:09:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.721 13:09:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:25.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.721 13:09:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.721 13:09:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:25.721 13:09:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.721 [2024-12-06 13:09:12.494638] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:14:25.721 [2024-12-06 13:09:12.494861] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65309 ] 00:14:25.721 [2024-12-06 13:09:12.697978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.989 [2024-12-06 13:09:12.838739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.248 [2024-12-06 13:09:13.046541] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:26.248 [2024-12-06 13:09:13.046688] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:26.506 13:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:26.506 13:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:26.506 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:26.506 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:26.506 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:26.506 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:26.506 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:26.506 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:26.506 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:26.506 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:26.506 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:26.506 
13:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.506 13:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.766 malloc1 00:14:26.766 13:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.766 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:26.766 13:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.766 13:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.766 [2024-12-06 13:09:13.529631] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:26.766 [2024-12-06 13:09:13.529942] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.766 [2024-12-06 13:09:13.529995] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:26.766 [2024-12-06 13:09:13.530017] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.766 [2024-12-06 13:09:13.533258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.766 [2024-12-06 13:09:13.533470] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:26.766 pt1 00:14:26.766 13:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.766 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:26.766 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:26.766 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:26.766 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:26.766 13:09:13 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:26.766 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:26.766 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:26.766 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:26.766 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:26.766 13:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.766 13:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.766 malloc2 00:14:26.766 13:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.766 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:26.766 13:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.766 13:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.766 [2024-12-06 13:09:13.591999] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:26.766 [2024-12-06 13:09:13.592111] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.767 [2024-12-06 13:09:13.592175] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:26.767 [2024-12-06 13:09:13.592193] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.767 [2024-12-06 13:09:13.595763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.767 [2024-12-06 13:09:13.595860] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:26.767 
pt2 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.767 malloc3 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.767 [2024-12-06 13:09:13.672931] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:26.767 [2024-12-06 13:09:13.673186] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.767 [2024-12-06 13:09:13.673265] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:26.767 [2024-12-06 13:09:13.673490] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.767 [2024-12-06 13:09:13.676617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.767 [2024-12-06 13:09:13.676855] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:26.767 pt3 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.767 [2024-12-06 13:09:13.685223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:26.767 [2024-12-06 13:09:13.688186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:26.767 [2024-12-06 13:09:13.688453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:26.767 [2024-12-06 13:09:13.688728] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:26.767 [2024-12-06 13:09:13.688751] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:26.767 [2024-12-06 13:09:13.689090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:14:26.767 [2024-12-06 13:09:13.689327] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:26.767 [2024-12-06 13:09:13.689342] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:26.767 [2024-12-06 13:09:13.689634] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.767 13:09:13 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.767 "name": "raid_bdev1", 00:14:26.767 "uuid": "052e677e-2012-4163-9b8e-66a551cf0b9a", 00:14:26.767 "strip_size_kb": 64, 00:14:26.767 "state": "online", 00:14:26.767 "raid_level": "raid0", 00:14:26.767 "superblock": true, 00:14:26.767 "num_base_bdevs": 3, 00:14:26.767 "num_base_bdevs_discovered": 3, 00:14:26.767 "num_base_bdevs_operational": 3, 00:14:26.767 "base_bdevs_list": [ 00:14:26.767 { 00:14:26.767 "name": "pt1", 00:14:26.767 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:26.767 "is_configured": true, 00:14:26.767 "data_offset": 2048, 00:14:26.767 "data_size": 63488 00:14:26.767 }, 00:14:26.767 { 00:14:26.767 "name": "pt2", 00:14:26.767 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:26.767 "is_configured": true, 00:14:26.767 "data_offset": 2048, 00:14:26.767 "data_size": 63488 00:14:26.767 }, 00:14:26.767 { 00:14:26.767 "name": "pt3", 00:14:26.767 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:26.767 "is_configured": true, 00:14:26.767 "data_offset": 2048, 00:14:26.767 "data_size": 63488 00:14:26.767 } 00:14:26.767 ] 00:14:26.767 }' 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.767 13:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.335 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:27.335 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:27.335 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:27.335 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:14:27.335 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:27.335 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:27.335 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:27.335 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.335 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.335 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:27.335 [2024-12-06 13:09:14.246209] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:27.335 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.335 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:27.335 "name": "raid_bdev1", 00:14:27.335 "aliases": [ 00:14:27.335 "052e677e-2012-4163-9b8e-66a551cf0b9a" 00:14:27.335 ], 00:14:27.335 "product_name": "Raid Volume", 00:14:27.335 "block_size": 512, 00:14:27.335 "num_blocks": 190464, 00:14:27.335 "uuid": "052e677e-2012-4163-9b8e-66a551cf0b9a", 00:14:27.335 "assigned_rate_limits": { 00:14:27.335 "rw_ios_per_sec": 0, 00:14:27.335 "rw_mbytes_per_sec": 0, 00:14:27.335 "r_mbytes_per_sec": 0, 00:14:27.335 "w_mbytes_per_sec": 0 00:14:27.335 }, 00:14:27.335 "claimed": false, 00:14:27.335 "zoned": false, 00:14:27.335 "supported_io_types": { 00:14:27.335 "read": true, 00:14:27.335 "write": true, 00:14:27.335 "unmap": true, 00:14:27.335 "flush": true, 00:14:27.335 "reset": true, 00:14:27.335 "nvme_admin": false, 00:14:27.335 "nvme_io": false, 00:14:27.335 "nvme_io_md": false, 00:14:27.335 "write_zeroes": true, 00:14:27.335 "zcopy": false, 00:14:27.335 "get_zone_info": false, 00:14:27.335 "zone_management": false, 00:14:27.335 "zone_append": false, 00:14:27.335 "compare": 
false, 00:14:27.335 "compare_and_write": false, 00:14:27.335 "abort": false, 00:14:27.335 "seek_hole": false, 00:14:27.335 "seek_data": false, 00:14:27.335 "copy": false, 00:14:27.335 "nvme_iov_md": false 00:14:27.335 }, 00:14:27.335 "memory_domains": [ 00:14:27.335 { 00:14:27.335 "dma_device_id": "system", 00:14:27.335 "dma_device_type": 1 00:14:27.335 }, 00:14:27.335 { 00:14:27.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.335 "dma_device_type": 2 00:14:27.335 }, 00:14:27.335 { 00:14:27.335 "dma_device_id": "system", 00:14:27.335 "dma_device_type": 1 00:14:27.335 }, 00:14:27.335 { 00:14:27.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.335 "dma_device_type": 2 00:14:27.335 }, 00:14:27.335 { 00:14:27.335 "dma_device_id": "system", 00:14:27.335 "dma_device_type": 1 00:14:27.335 }, 00:14:27.335 { 00:14:27.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.335 "dma_device_type": 2 00:14:27.335 } 00:14:27.335 ], 00:14:27.335 "driver_specific": { 00:14:27.335 "raid": { 00:14:27.335 "uuid": "052e677e-2012-4163-9b8e-66a551cf0b9a", 00:14:27.335 "strip_size_kb": 64, 00:14:27.335 "state": "online", 00:14:27.335 "raid_level": "raid0", 00:14:27.335 "superblock": true, 00:14:27.335 "num_base_bdevs": 3, 00:14:27.335 "num_base_bdevs_discovered": 3, 00:14:27.335 "num_base_bdevs_operational": 3, 00:14:27.335 "base_bdevs_list": [ 00:14:27.335 { 00:14:27.335 "name": "pt1", 00:14:27.335 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:27.335 "is_configured": true, 00:14:27.335 "data_offset": 2048, 00:14:27.335 "data_size": 63488 00:14:27.335 }, 00:14:27.335 { 00:14:27.335 "name": "pt2", 00:14:27.335 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:27.335 "is_configured": true, 00:14:27.335 "data_offset": 2048, 00:14:27.335 "data_size": 63488 00:14:27.335 }, 00:14:27.335 { 00:14:27.335 "name": "pt3", 00:14:27.335 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:27.335 "is_configured": true, 00:14:27.335 "data_offset": 2048, 00:14:27.335 "data_size": 
63488 00:14:27.335 } 00:14:27.335 ] 00:14:27.335 } 00:14:27.335 } 00:14:27.335 }' 00:14:27.335 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:27.594 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:27.594 pt2 00:14:27.594 pt3' 00:14:27.594 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.594 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:27.594 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.594 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:27.594 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.594 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.594 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.594 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.594 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.594 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.594 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.594 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:27.594 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.594 13:09:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.594 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.594 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.594 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.594 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.594 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.594 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:27.594 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.594 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.594 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.594 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.594 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.594 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.594 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:27.594 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.594 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.594 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:27.594 [2024-12-06 13:09:14.578232] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:27.594 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:27.853 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=052e677e-2012-4163-9b8e-66a551cf0b9a 00:14:27.853 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 052e677e-2012-4163-9b8e-66a551cf0b9a ']' 00:14:27.853 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.854 [2024-12-06 13:09:14.629939] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:27.854 [2024-12-06 13:09:14.629980] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:27.854 [2024-12-06 13:09:14.630113] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:27.854 [2024-12-06 13:09:14.630200] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:27.854 [2024-12-06 13:09:14.630217] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:27.854 13:09:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.854 [2024-12-06 13:09:14.773998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:27.854 [2024-12-06 13:09:14.776853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:27.854 [2024-12-06 13:09:14.776931] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:27.854 [2024-12-06 13:09:14.777014] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:27.854 [2024-12-06 13:09:14.777094] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:27.854 [2024-12-06 13:09:14.777142] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:27.854 [2024-12-06 13:09:14.777169] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:27.854 [2024-12-06 13:09:14.777186] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:27.854 request: 00:14:27.854 { 00:14:27.854 "name": "raid_bdev1", 00:14:27.854 "raid_level": "raid0", 00:14:27.854 "base_bdevs": [ 00:14:27.854 "malloc1", 00:14:27.854 "malloc2", 00:14:27.854 "malloc3" 00:14:27.854 ], 00:14:27.854 "strip_size_kb": 64, 00:14:27.854 "superblock": false, 00:14:27.854 "method": "bdev_raid_create", 00:14:27.854 "req_id": 1 00:14:27.854 } 00:14:27.854 Got JSON-RPC error response 00:14:27.854 response: 00:14:27.854 { 00:14:27.854 "code": -17, 00:14:27.854 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:27.854 } 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.854 [2024-12-06 13:09:14.838031] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:27.854 [2024-12-06 13:09:14.838253] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.854 [2024-12-06 13:09:14.838409] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:27.854 [2024-12-06 13:09:14.838535] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.854 [2024-12-06 13:09:14.841922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.854 [2024-12-06 13:09:14.842087] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:27.854 [2024-12-06 13:09:14.842323] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:27.854 [2024-12-06 13:09:14.842523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:14:27.854 pt1 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.854 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.113 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.113 "name": "raid_bdev1", 00:14:28.113 "uuid": "052e677e-2012-4163-9b8e-66a551cf0b9a", 00:14:28.113 
"strip_size_kb": 64, 00:14:28.113 "state": "configuring", 00:14:28.113 "raid_level": "raid0", 00:14:28.113 "superblock": true, 00:14:28.113 "num_base_bdevs": 3, 00:14:28.113 "num_base_bdevs_discovered": 1, 00:14:28.113 "num_base_bdevs_operational": 3, 00:14:28.113 "base_bdevs_list": [ 00:14:28.113 { 00:14:28.113 "name": "pt1", 00:14:28.113 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:28.113 "is_configured": true, 00:14:28.113 "data_offset": 2048, 00:14:28.113 "data_size": 63488 00:14:28.113 }, 00:14:28.113 { 00:14:28.113 "name": null, 00:14:28.113 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:28.113 "is_configured": false, 00:14:28.113 "data_offset": 2048, 00:14:28.113 "data_size": 63488 00:14:28.113 }, 00:14:28.113 { 00:14:28.113 "name": null, 00:14:28.113 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:28.113 "is_configured": false, 00:14:28.113 "data_offset": 2048, 00:14:28.113 "data_size": 63488 00:14:28.113 } 00:14:28.113 ] 00:14:28.113 }' 00:14:28.113 13:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.113 13:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.371 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:28.371 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:28.371 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.371 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.371 [2024-12-06 13:09:15.378637] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:28.371 [2024-12-06 13:09:15.378745] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.371 [2024-12-06 13:09:15.378790] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:14:28.371 [2024-12-06 13:09:15.378807] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.371 [2024-12-06 13:09:15.379449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.371 [2024-12-06 13:09:15.379509] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:28.371 [2024-12-06 13:09:15.379637] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:28.371 [2024-12-06 13:09:15.379679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:28.371 pt2 00:14:28.371 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.371 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:28.371 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.371 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.629 [2024-12-06 13:09:15.386565] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:28.629 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.629 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:14:28.629 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:28.629 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:28.629 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:28.629 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.629 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:28.629 13:09:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.629 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.629 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.629 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.629 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.629 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.629 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.629 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.629 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.629 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.629 "name": "raid_bdev1", 00:14:28.629 "uuid": "052e677e-2012-4163-9b8e-66a551cf0b9a", 00:14:28.629 "strip_size_kb": 64, 00:14:28.629 "state": "configuring", 00:14:28.629 "raid_level": "raid0", 00:14:28.629 "superblock": true, 00:14:28.629 "num_base_bdevs": 3, 00:14:28.629 "num_base_bdevs_discovered": 1, 00:14:28.629 "num_base_bdevs_operational": 3, 00:14:28.629 "base_bdevs_list": [ 00:14:28.629 { 00:14:28.629 "name": "pt1", 00:14:28.629 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:28.629 "is_configured": true, 00:14:28.629 "data_offset": 2048, 00:14:28.629 "data_size": 63488 00:14:28.629 }, 00:14:28.629 { 00:14:28.629 "name": null, 00:14:28.629 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:28.629 "is_configured": false, 00:14:28.629 "data_offset": 0, 00:14:28.629 "data_size": 63488 00:14:28.629 }, 00:14:28.629 { 00:14:28.629 "name": null, 00:14:28.629 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:28.629 
"is_configured": false, 00:14:28.629 "data_offset": 2048, 00:14:28.629 "data_size": 63488 00:14:28.629 } 00:14:28.629 ] 00:14:28.629 }' 00:14:28.629 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.630 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.196 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:29.196 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:29.196 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:29.196 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.196 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.196 [2024-12-06 13:09:15.922729] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:29.196 [2024-12-06 13:09:15.922997] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.196 [2024-12-06 13:09:15.923038] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:29.196 [2024-12-06 13:09:15.923069] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.196 [2024-12-06 13:09:15.923781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.196 [2024-12-06 13:09:15.923837] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:29.196 [2024-12-06 13:09:15.923975] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:29.196 [2024-12-06 13:09:15.924023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:29.196 pt2 00:14:29.196 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:29.196 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:29.196 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:29.196 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:29.196 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.196 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.196 [2024-12-06 13:09:15.934695] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:29.196 [2024-12-06 13:09:15.934766] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.196 [2024-12-06 13:09:15.934790] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:29.196 [2024-12-06 13:09:15.934807] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.196 [2024-12-06 13:09:15.935306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.196 [2024-12-06 13:09:15.935348] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:29.196 [2024-12-06 13:09:15.935427] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:29.196 [2024-12-06 13:09:15.935482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:29.196 [2024-12-06 13:09:15.935642] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:29.196 [2024-12-06 13:09:15.935669] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:29.197 [2024-12-06 13:09:15.935987] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:29.197 [2024-12-06 13:09:15.936209] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:29.197 [2024-12-06 13:09:15.936224] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:29.197 [2024-12-06 13:09:15.936406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.197 pt3 00:14:29.197 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.197 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:29.197 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:29.197 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:29.197 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.197 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.197 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:29.197 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.197 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:29.197 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.197 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.197 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.197 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.197 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.197 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:14:29.197 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.197 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.197 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.197 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.197 "name": "raid_bdev1", 00:14:29.197 "uuid": "052e677e-2012-4163-9b8e-66a551cf0b9a", 00:14:29.197 "strip_size_kb": 64, 00:14:29.197 "state": "online", 00:14:29.197 "raid_level": "raid0", 00:14:29.197 "superblock": true, 00:14:29.197 "num_base_bdevs": 3, 00:14:29.197 "num_base_bdevs_discovered": 3, 00:14:29.197 "num_base_bdevs_operational": 3, 00:14:29.197 "base_bdevs_list": [ 00:14:29.197 { 00:14:29.197 "name": "pt1", 00:14:29.197 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:29.197 "is_configured": true, 00:14:29.197 "data_offset": 2048, 00:14:29.197 "data_size": 63488 00:14:29.197 }, 00:14:29.197 { 00:14:29.197 "name": "pt2", 00:14:29.197 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:29.197 "is_configured": true, 00:14:29.197 "data_offset": 2048, 00:14:29.197 "data_size": 63488 00:14:29.197 }, 00:14:29.197 { 00:14:29.197 "name": "pt3", 00:14:29.197 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:29.197 "is_configured": true, 00:14:29.197 "data_offset": 2048, 00:14:29.197 "data_size": 63488 00:14:29.197 } 00:14:29.197 ] 00:14:29.197 }' 00:14:29.197 13:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.197 13:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.455 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:29.455 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:29.455 13:09:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:29.455 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:29.455 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:29.455 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:29.714 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:29.714 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:29.714 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.714 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.714 [2024-12-06 13:09:16.479373] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:29.714 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.714 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:29.714 "name": "raid_bdev1", 00:14:29.714 "aliases": [ 00:14:29.714 "052e677e-2012-4163-9b8e-66a551cf0b9a" 00:14:29.714 ], 00:14:29.714 "product_name": "Raid Volume", 00:14:29.714 "block_size": 512, 00:14:29.714 "num_blocks": 190464, 00:14:29.714 "uuid": "052e677e-2012-4163-9b8e-66a551cf0b9a", 00:14:29.714 "assigned_rate_limits": { 00:14:29.714 "rw_ios_per_sec": 0, 00:14:29.714 "rw_mbytes_per_sec": 0, 00:14:29.714 "r_mbytes_per_sec": 0, 00:14:29.714 "w_mbytes_per_sec": 0 00:14:29.714 }, 00:14:29.714 "claimed": false, 00:14:29.714 "zoned": false, 00:14:29.714 "supported_io_types": { 00:14:29.714 "read": true, 00:14:29.714 "write": true, 00:14:29.714 "unmap": true, 00:14:29.714 "flush": true, 00:14:29.714 "reset": true, 00:14:29.714 "nvme_admin": false, 00:14:29.714 "nvme_io": false, 00:14:29.714 "nvme_io_md": false, 00:14:29.714 
"write_zeroes": true, 00:14:29.714 "zcopy": false, 00:14:29.714 "get_zone_info": false, 00:14:29.714 "zone_management": false, 00:14:29.714 "zone_append": false, 00:14:29.714 "compare": false, 00:14:29.714 "compare_and_write": false, 00:14:29.714 "abort": false, 00:14:29.714 "seek_hole": false, 00:14:29.714 "seek_data": false, 00:14:29.714 "copy": false, 00:14:29.714 "nvme_iov_md": false 00:14:29.714 }, 00:14:29.714 "memory_domains": [ 00:14:29.714 { 00:14:29.714 "dma_device_id": "system", 00:14:29.714 "dma_device_type": 1 00:14:29.714 }, 00:14:29.714 { 00:14:29.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.714 "dma_device_type": 2 00:14:29.714 }, 00:14:29.714 { 00:14:29.714 "dma_device_id": "system", 00:14:29.714 "dma_device_type": 1 00:14:29.714 }, 00:14:29.714 { 00:14:29.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.714 "dma_device_type": 2 00:14:29.714 }, 00:14:29.714 { 00:14:29.714 "dma_device_id": "system", 00:14:29.714 "dma_device_type": 1 00:14:29.714 }, 00:14:29.714 { 00:14:29.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.714 "dma_device_type": 2 00:14:29.714 } 00:14:29.714 ], 00:14:29.714 "driver_specific": { 00:14:29.714 "raid": { 00:14:29.714 "uuid": "052e677e-2012-4163-9b8e-66a551cf0b9a", 00:14:29.714 "strip_size_kb": 64, 00:14:29.714 "state": "online", 00:14:29.714 "raid_level": "raid0", 00:14:29.714 "superblock": true, 00:14:29.714 "num_base_bdevs": 3, 00:14:29.714 "num_base_bdevs_discovered": 3, 00:14:29.714 "num_base_bdevs_operational": 3, 00:14:29.714 "base_bdevs_list": [ 00:14:29.714 { 00:14:29.714 "name": "pt1", 00:14:29.714 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:29.714 "is_configured": true, 00:14:29.714 "data_offset": 2048, 00:14:29.714 "data_size": 63488 00:14:29.714 }, 00:14:29.714 { 00:14:29.714 "name": "pt2", 00:14:29.714 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:29.714 "is_configured": true, 00:14:29.714 "data_offset": 2048, 00:14:29.714 "data_size": 63488 00:14:29.714 }, 00:14:29.714 
{ 00:14:29.714 "name": "pt3", 00:14:29.714 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:29.714 "is_configured": true, 00:14:29.714 "data_offset": 2048, 00:14:29.714 "data_size": 63488 00:14:29.714 } 00:14:29.714 ] 00:14:29.714 } 00:14:29.714 } 00:14:29.714 }' 00:14:29.714 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:29.714 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:29.714 pt2 00:14:29.714 pt3' 00:14:29.714 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.714 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:29.714 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.714 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:29.714 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.714 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.714 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.714 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.714 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.714 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.714 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.714 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:29.714 13:09:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.715 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.715 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.715 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.973 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.973 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.973 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.974 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:29.974 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.974 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.974 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.974 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.974 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.974 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.974 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:29.974 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:29.974 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.974 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.974 
[2024-12-06 13:09:16.803442] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:29.974 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.974 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 052e677e-2012-4163-9b8e-66a551cf0b9a '!=' 052e677e-2012-4163-9b8e-66a551cf0b9a ']' 00:14:29.974 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:14:29.974 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:29.974 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:29.974 13:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65309 00:14:29.974 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65309 ']' 00:14:29.974 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65309 00:14:29.974 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:29.974 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:29.974 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65309 00:14:29.974 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:29.974 killing process with pid 65309 00:14:29.974 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:29.974 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65309' 00:14:29.974 13:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65309 00:14:29.974 [2024-12-06 13:09:16.882618] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:29.974 13:09:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 65309 00:14:29.974 [2024-12-06 13:09:16.882794] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:29.974 [2024-12-06 13:09:16.882884] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:29.974 [2024-12-06 13:09:16.882906] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:30.233 [2024-12-06 13:09:17.170013] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:31.610 13:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:31.610 00:14:31.610 real 0m5.929s 00:14:31.610 user 0m8.792s 00:14:31.610 sys 0m0.957s 00:14:31.610 13:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:31.610 13:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.610 ************************************ 00:14:31.610 END TEST raid_superblock_test 00:14:31.610 ************************************ 00:14:31.610 13:09:18 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:14:31.610 13:09:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:31.610 13:09:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:31.610 13:09:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:31.610 ************************************ 00:14:31.610 START TEST raid_read_error_test 00:14:31.610 ************************************ 00:14:31.610 13:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:14:31.610 13:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:14:31.610 13:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:14:31.610 13:09:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:31.610 13:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:31.610 13:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:31.610 13:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:31.610 13:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:31.611 13:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:31.611 13:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:31.611 13:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:31.611 13:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:31.611 13:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:31.611 13:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:31.611 13:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:31.611 13:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:31.611 13:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:31.611 13:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:31.611 13:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:31.611 13:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:31.611 13:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:31.611 13:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:31.611 13:09:18 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:14:31.611 13:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:31.611 13:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:31.611 13:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:31.611 13:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.inb7cyNDuV 00:14:31.611 13:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65574 00:14:31.611 13:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65574 00:14:31.611 13:09:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:31.611 13:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65574 ']' 00:14:31.611 13:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.611 13:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:31.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.611 13:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.611 13:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:31.611 13:09:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.611 [2024-12-06 13:09:18.495452] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:14:31.611 [2024-12-06 13:09:18.495687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65574 ] 00:14:31.870 [2024-12-06 13:09:18.684521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.870 [2024-12-06 13:09:18.833153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.128 [2024-12-06 13:09:19.047942] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:32.128 [2024-12-06 13:09:19.048032] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:32.696 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:32.696 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.697 BaseBdev1_malloc 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.697 true 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.697 [2024-12-06 13:09:19.538851] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:32.697 [2024-12-06 13:09:19.539478] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.697 [2024-12-06 13:09:19.539554] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:32.697 [2024-12-06 13:09:19.539577] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.697 [2024-12-06 13:09:19.542653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.697 [2024-12-06 13:09:19.542698] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:32.697 BaseBdev1 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.697 BaseBdev2_malloc 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.697 true 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.697 [2024-12-06 13:09:19.608872] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:32.697 [2024-12-06 13:09:19.608948] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.697 [2024-12-06 13:09:19.608971] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:32.697 [2024-12-06 13:09:19.608987] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.697 [2024-12-06 13:09:19.611896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.697 [2024-12-06 13:09:19.611936] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:32.697 BaseBdev2 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.697 BaseBdev3_malloc 00:14:32.697 13:09:19 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.697 true 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.697 [2024-12-06 13:09:19.684924] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:32.697 [2024-12-06 13:09:19.685001] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.697 [2024-12-06 13:09:19.685028] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:32.697 [2024-12-06 13:09:19.685061] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.697 [2024-12-06 13:09:19.687964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.697 [2024-12-06 13:09:19.688006] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:32.697 BaseBdev3 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.697 [2024-12-06 13:09:19.697022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:32.697 [2024-12-06 13:09:19.699560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:32.697 [2024-12-06 13:09:19.699680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:32.697 [2024-12-06 13:09:19.699955] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:32.697 [2024-12-06 13:09:19.699983] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:32.697 [2024-12-06 13:09:19.700293] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:14:32.697 [2024-12-06 13:09:19.700560] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:32.697 [2024-12-06 13:09:19.700584] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:32.697 [2024-12-06 13:09:19.700768] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.697 13:09:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.697 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.956 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.956 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.956 "name": "raid_bdev1", 00:14:32.956 "uuid": "41a4a270-4ffd-4b83-bdb4-42871622d721", 00:14:32.956 "strip_size_kb": 64, 00:14:32.956 "state": "online", 00:14:32.956 "raid_level": "raid0", 00:14:32.956 "superblock": true, 00:14:32.956 "num_base_bdevs": 3, 00:14:32.956 "num_base_bdevs_discovered": 3, 00:14:32.956 "num_base_bdevs_operational": 3, 00:14:32.956 "base_bdevs_list": [ 00:14:32.956 { 00:14:32.956 "name": "BaseBdev1", 00:14:32.956 "uuid": "1518c274-769d-55f8-9852-941fa467ab6d", 00:14:32.956 "is_configured": true, 00:14:32.956 "data_offset": 2048, 00:14:32.956 "data_size": 63488 00:14:32.956 }, 00:14:32.956 { 00:14:32.956 "name": "BaseBdev2", 00:14:32.956 "uuid": "72a8aefa-99d0-5b2f-b1c8-f83295f4ed8c", 00:14:32.956 "is_configured": true, 00:14:32.956 "data_offset": 2048, 00:14:32.956 "data_size": 63488 
00:14:32.956 }, 00:14:32.956 { 00:14:32.956 "name": "BaseBdev3", 00:14:32.956 "uuid": "6c48fdf2-f0e6-5c66-831e-05bfbaed8eb5", 00:14:32.956 "is_configured": true, 00:14:32.956 "data_offset": 2048, 00:14:32.956 "data_size": 63488 00:14:32.956 } 00:14:32.956 ] 00:14:32.956 }' 00:14:32.956 13:09:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.956 13:09:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.214 13:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:33.214 13:09:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:33.472 [2024-12-06 13:09:20.366725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:14:34.406 13:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:34.406 13:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.406 13:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.406 13:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.406 13:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:34.406 13:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:34.406 13:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:14:34.406 13:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:34.406 13:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.406 13:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:14:34.406 13:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:34.406 13:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.406 13:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:34.406 13:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.406 13:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.407 13:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.407 13:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.407 13:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.407 13:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.407 13:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.407 13:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.407 13:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.407 13:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.407 "name": "raid_bdev1", 00:14:34.407 "uuid": "41a4a270-4ffd-4b83-bdb4-42871622d721", 00:14:34.407 "strip_size_kb": 64, 00:14:34.407 "state": "online", 00:14:34.407 "raid_level": "raid0", 00:14:34.407 "superblock": true, 00:14:34.407 "num_base_bdevs": 3, 00:14:34.407 "num_base_bdevs_discovered": 3, 00:14:34.407 "num_base_bdevs_operational": 3, 00:14:34.407 "base_bdevs_list": [ 00:14:34.407 { 00:14:34.407 "name": "BaseBdev1", 00:14:34.407 "uuid": "1518c274-769d-55f8-9852-941fa467ab6d", 00:14:34.407 "is_configured": true, 00:14:34.407 "data_offset": 2048, 00:14:34.407 "data_size": 63488 
00:14:34.407 }, 00:14:34.407 { 00:14:34.407 "name": "BaseBdev2", 00:14:34.407 "uuid": "72a8aefa-99d0-5b2f-b1c8-f83295f4ed8c", 00:14:34.407 "is_configured": true, 00:14:34.407 "data_offset": 2048, 00:14:34.407 "data_size": 63488 00:14:34.407 }, 00:14:34.407 { 00:14:34.407 "name": "BaseBdev3", 00:14:34.407 "uuid": "6c48fdf2-f0e6-5c66-831e-05bfbaed8eb5", 00:14:34.407 "is_configured": true, 00:14:34.407 "data_offset": 2048, 00:14:34.407 "data_size": 63488 00:14:34.407 } 00:14:34.407 ] 00:14:34.407 }' 00:14:34.407 13:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.407 13:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.972 13:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:34.972 13:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.972 13:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.972 [2024-12-06 13:09:21.760859] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:34.972 [2024-12-06 13:09:21.760904] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:34.973 [2024-12-06 13:09:21.764375] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:34.973 [2024-12-06 13:09:21.764449] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.973 [2024-12-06 13:09:21.764520] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:34.973 [2024-12-06 13:09:21.764538] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:34.973 { 00:14:34.973 "results": [ 00:14:34.973 { 00:14:34.973 "job": "raid_bdev1", 00:14:34.973 "core_mask": "0x1", 00:14:34.973 "workload": "randrw", 00:14:34.973 "percentage": 50, 
00:14:34.973 "status": "finished", 00:14:34.973 "queue_depth": 1, 00:14:34.973 "io_size": 131072, 00:14:34.973 "runtime": 1.391721, 00:14:34.973 "iops": 9553.638983675608, 00:14:34.973 "mibps": 1194.204872959451, 00:14:34.973 "io_failed": 1, 00:14:34.973 "io_timeout": 0, 00:14:34.973 "avg_latency_us": 146.7651306172958, 00:14:34.973 "min_latency_us": 33.74545454545454, 00:14:34.973 "max_latency_us": 1861.8181818181818 00:14:34.973 } 00:14:34.973 ], 00:14:34.973 "core_count": 1 00:14:34.973 } 00:14:34.973 13:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.973 13:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65574 00:14:34.973 13:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65574 ']' 00:14:34.973 13:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65574 00:14:34.973 13:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:14:34.973 13:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:34.973 13:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65574 00:14:34.973 13:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:34.973 killing process with pid 65574 00:14:34.973 13:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:34.973 13:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65574' 00:14:34.973 13:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65574 00:14:34.973 13:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65574 00:14:34.973 [2024-12-06 13:09:21.797381] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:35.232 [2024-12-06 
13:09:22.018623] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:36.607 13:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.inb7cyNDuV 00:14:36.607 13:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:36.607 13:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:36.607 13:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:14:36.607 13:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:14:36.607 13:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:36.607 13:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:36.607 13:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:14:36.607 00:14:36.607 real 0m4.836s 00:14:36.607 user 0m5.939s 00:14:36.607 sys 0m0.650s 00:14:36.607 13:09:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:36.607 ************************************ 00:14:36.607 END TEST raid_read_error_test 00:14:36.607 ************************************ 00:14:36.607 13:09:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.607 13:09:23 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:14:36.607 13:09:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:36.607 13:09:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:36.607 13:09:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:36.607 ************************************ 00:14:36.607 START TEST raid_write_error_test 00:14:36.607 ************************************ 00:14:36.607 13:09:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:14:36.607 13:09:23 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:14:36.607 13:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:14:36.607 13:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:36.607 13:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:36.607 13:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:36.607 13:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:36.607 13:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:36.607 13:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:36.607 13:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:36.607 13:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:36.607 13:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:36.607 13:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:36.607 13:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:36.607 13:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:36.607 13:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:36.607 13:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:36.607 13:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:36.607 13:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:36.607 13:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:36.607 13:09:23 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:36.607 13:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:36.607 13:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:14:36.607 13:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:36.607 13:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:36.607 13:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:36.607 13:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.mjf1SFTXql 00:14:36.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.607 13:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65720 00:14:36.607 13:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65720 00:14:36.607 13:09:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:36.608 13:09:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65720 ']' 00:14:36.608 13:09:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.608 13:09:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:36.608 13:09:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:36.608 13:09:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:36.608 13:09:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.608 [2024-12-06 13:09:23.400955] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:14:36.608 [2024-12-06 13:09:23.401448] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65720 ] 00:14:36.608 [2024-12-06 13:09:23.587454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.866 [2024-12-06 13:09:23.730218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.123 [2024-12-06 13:09:23.958660] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:37.123 [2024-12-06 13:09:23.959034] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:37.382 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:37.382 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:37.382 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:37.382 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:37.382 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.382 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.382 BaseBdev1_malloc 00:14:37.382 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.382 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:14:37.382 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.382 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.382 true 00:14:37.382 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.382 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:37.382 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.382 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.382 [2024-12-06 13:09:24.390922] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:37.382 [2024-12-06 13:09:24.391024] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.382 [2024-12-06 13:09:24.391055] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:37.382 [2024-12-06 13:09:24.391072] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.382 [2024-12-06 13:09:24.393976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.382 [2024-12-06 13:09:24.394022] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:37.641 BaseBdev1 00:14:37.641 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.641 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:37.641 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:37.641 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.641 13:09:24 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:37.641 BaseBdev2_malloc 00:14:37.641 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.641 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:37.641 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.641 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.641 true 00:14:37.641 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.641 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:37.641 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.641 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.641 [2024-12-06 13:09:24.452895] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:37.641 [2024-12-06 13:09:24.453336] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.641 [2024-12-06 13:09:24.453376] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:37.641 [2024-12-06 13:09:24.453395] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.641 [2024-12-06 13:09:24.456620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.641 [2024-12-06 13:09:24.456669] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:37.641 BaseBdev2 00:14:37.641 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.641 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:37.641 13:09:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:37.641 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.641 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.641 BaseBdev3_malloc 00:14:37.641 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.641 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:37.641 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.641 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.641 true 00:14:37.641 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.641 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:37.641 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.641 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.641 [2024-12-06 13:09:24.525278] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:37.641 [2024-12-06 13:09:24.525365] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.641 [2024-12-06 13:09:24.525409] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:37.641 [2024-12-06 13:09:24.525428] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.641 [2024-12-06 13:09:24.528579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.641 [2024-12-06 13:09:24.528624] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:14:37.641 BaseBdev3 00:14:37.641 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.641 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:14:37.641 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.641 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.641 [2024-12-06 13:09:24.533494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:37.641 [2024-12-06 13:09:24.536488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:37.641 [2024-12-06 13:09:24.536766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:37.641 [2024-12-06 13:09:24.537202] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:37.641 [2024-12-06 13:09:24.537347] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:37.641 [2024-12-06 13:09:24.537768] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:14:37.641 [2024-12-06 13:09:24.538071] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:37.642 [2024-12-06 13:09:24.538242] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:37.642 [2024-12-06 13:09:24.538735] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.642 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.642 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:37.642 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:14:37.642 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.642 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:37.642 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.642 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.642 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.642 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.642 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.642 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.642 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.642 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.642 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.642 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.642 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.642 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.642 "name": "raid_bdev1", 00:14:37.642 "uuid": "f36e90fc-2e7a-400c-a321-6c0cd330e783", 00:14:37.642 "strip_size_kb": 64, 00:14:37.642 "state": "online", 00:14:37.642 "raid_level": "raid0", 00:14:37.642 "superblock": true, 00:14:37.642 "num_base_bdevs": 3, 00:14:37.642 "num_base_bdevs_discovered": 3, 00:14:37.642 "num_base_bdevs_operational": 3, 00:14:37.642 "base_bdevs_list": [ 00:14:37.642 { 00:14:37.642 "name": "BaseBdev1", 
00:14:37.642 "uuid": "9f7b09b9-9d84-5a11-8661-eafdcb4cb080", 00:14:37.642 "is_configured": true, 00:14:37.642 "data_offset": 2048, 00:14:37.642 "data_size": 63488 00:14:37.642 }, 00:14:37.642 { 00:14:37.642 "name": "BaseBdev2", 00:14:37.642 "uuid": "d7199992-ec9a-5264-8355-ebb8f38aff8b", 00:14:37.642 "is_configured": true, 00:14:37.642 "data_offset": 2048, 00:14:37.642 "data_size": 63488 00:14:37.642 }, 00:14:37.642 { 00:14:37.642 "name": "BaseBdev3", 00:14:37.642 "uuid": "bf6f3824-5efa-5753-9dca-a9577ef033c2", 00:14:37.642 "is_configured": true, 00:14:37.642 "data_offset": 2048, 00:14:37.642 "data_size": 63488 00:14:37.642 } 00:14:37.642 ] 00:14:37.642 }' 00:14:37.642 13:09:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.642 13:09:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.244 13:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:38.244 13:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:38.244 [2024-12-06 13:09:25.172363] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:14:39.174 13:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:39.174 13:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.174 13:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.174 13:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.174 13:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:39.174 13:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:39.174 13:09:26 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:14:39.174 13:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:39.174 13:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.174 13:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.174 13:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:39.174 13:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.174 13:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:39.174 13:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.174 13:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.174 13:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.174 13:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.174 13:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.174 13:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.174 13:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.174 13:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.174 13:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.174 13:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.174 "name": "raid_bdev1", 00:14:39.174 "uuid": "f36e90fc-2e7a-400c-a321-6c0cd330e783", 00:14:39.174 "strip_size_kb": 64, 00:14:39.174 "state": "online", 00:14:39.174 
"raid_level": "raid0", 00:14:39.174 "superblock": true, 00:14:39.174 "num_base_bdevs": 3, 00:14:39.174 "num_base_bdevs_discovered": 3, 00:14:39.174 "num_base_bdevs_operational": 3, 00:14:39.174 "base_bdevs_list": [ 00:14:39.174 { 00:14:39.174 "name": "BaseBdev1", 00:14:39.174 "uuid": "9f7b09b9-9d84-5a11-8661-eafdcb4cb080", 00:14:39.174 "is_configured": true, 00:14:39.174 "data_offset": 2048, 00:14:39.174 "data_size": 63488 00:14:39.174 }, 00:14:39.174 { 00:14:39.174 "name": "BaseBdev2", 00:14:39.174 "uuid": "d7199992-ec9a-5264-8355-ebb8f38aff8b", 00:14:39.174 "is_configured": true, 00:14:39.174 "data_offset": 2048, 00:14:39.174 "data_size": 63488 00:14:39.174 }, 00:14:39.174 { 00:14:39.175 "name": "BaseBdev3", 00:14:39.175 "uuid": "bf6f3824-5efa-5753-9dca-a9577ef033c2", 00:14:39.175 "is_configured": true, 00:14:39.175 "data_offset": 2048, 00:14:39.175 "data_size": 63488 00:14:39.175 } 00:14:39.175 ] 00:14:39.175 }' 00:14:39.175 13:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.175 13:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.739 13:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:39.739 13:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.739 13:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.739 [2024-12-06 13:09:26.592654] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:39.739 [2024-12-06 13:09:26.593035] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:39.739 [2024-12-06 13:09:26.596756] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:39.739 [2024-12-06 13:09:26.597074] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.739 [2024-12-06 13:09:26.597162] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:39.739 [2024-12-06 13:09:26.597180] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:39.739 { 00:14:39.739 "results": [ 00:14:39.739 { 00:14:39.739 "job": "raid_bdev1", 00:14:39.739 "core_mask": "0x1", 00:14:39.739 "workload": "randrw", 00:14:39.739 "percentage": 50, 00:14:39.739 "status": "finished", 00:14:39.739 "queue_depth": 1, 00:14:39.739 "io_size": 131072, 00:14:39.739 "runtime": 1.418201, 00:14:39.739 "iops": 9790.572704433293, 00:14:39.739 "mibps": 1223.8215880541616, 00:14:39.739 "io_failed": 1, 00:14:39.739 "io_timeout": 0, 00:14:39.739 "avg_latency_us": 143.4180058397601, 00:14:39.739 "min_latency_us": 39.33090909090909, 00:14:39.739 "max_latency_us": 1951.1854545454546 00:14:39.739 } 00:14:39.739 ], 00:14:39.739 "core_count": 1 00:14:39.739 } 00:14:39.739 13:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.739 13:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65720 00:14:39.739 13:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65720 ']' 00:14:39.739 13:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65720 00:14:39.739 13:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:14:39.739 13:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:39.739 13:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65720 00:14:39.739 killing process with pid 65720 00:14:39.739 13:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:39.739 13:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:39.739 13:09:26 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65720' 00:14:39.739 13:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65720 00:14:39.739 13:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65720 00:14:39.739 [2024-12-06 13:09:26.639230] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:40.038 [2024-12-06 13:09:26.861315] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:41.429 13:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.mjf1SFTXql 00:14:41.429 13:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:41.429 13:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:41.429 13:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:14:41.429 13:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:14:41.429 13:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:41.429 13:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:41.429 13:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:14:41.429 00:14:41.429 real 0m4.791s 00:14:41.429 user 0m5.811s 00:14:41.429 sys 0m0.666s 00:14:41.429 ************************************ 00:14:41.429 END TEST raid_write_error_test 00:14:41.429 ************************************ 00:14:41.429 13:09:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:41.429 13:09:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.429 13:09:28 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:41.429 13:09:28 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:14:41.429 13:09:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:41.429 13:09:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:41.429 13:09:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:41.429 ************************************ 00:14:41.429 START TEST raid_state_function_test 00:14:41.429 ************************************ 00:14:41.429 13:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:14:41.429 13:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:14:41.429 13:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:41.429 13:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:41.429 13:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:41.429 13:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:41.429 13:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:41.429 13:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:41.429 13:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:41.429 13:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:41.429 13:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:41.429 13:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:41.429 13:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:41.429 13:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:41.429 13:09:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:41.429 13:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:41.429 13:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:41.429 13:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:41.429 13:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:41.429 13:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:41.429 13:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:41.429 13:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:41.429 13:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:14:41.429 13:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:41.429 13:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:41.429 13:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:41.429 13:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:41.429 13:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65869 00:14:41.429 Process raid pid: 65869 00:14:41.429 13:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65869' 00:14:41.429 13:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:41.429 13:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65869 00:14:41.429 13:09:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65869 ']' 00:14:41.429 13:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.429 13:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:41.429 13:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.429 13:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:41.429 13:09:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.429 [2024-12-06 13:09:28.209509] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:14:41.429 [2024-12-06 13:09:28.209896] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.429 [2024-12-06 13:09:28.387603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.688 [2024-12-06 13:09:28.530206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.947 [2024-12-06 13:09:28.756085] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:41.947 [2024-12-06 13:09:28.756145] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:42.521 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:42.521 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:42.521 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:42.521 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.521 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.521 [2024-12-06 13:09:29.260215] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:42.521 [2024-12-06 13:09:29.260301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:42.521 [2024-12-06 13:09:29.260318] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:42.521 [2024-12-06 13:09:29.260334] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:42.521 [2024-12-06 13:09:29.260343] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:42.521 [2024-12-06 13:09:29.260357] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:42.521 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.521 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:42.521 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.521 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.521 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:42.521 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.521 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:42.521 13:09:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.521 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.521 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.521 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.521 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.521 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.521 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.521 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.521 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.521 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.521 "name": "Existed_Raid", 00:14:42.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.521 "strip_size_kb": 64, 00:14:42.521 "state": "configuring", 00:14:42.521 "raid_level": "concat", 00:14:42.521 "superblock": false, 00:14:42.521 "num_base_bdevs": 3, 00:14:42.521 "num_base_bdevs_discovered": 0, 00:14:42.521 "num_base_bdevs_operational": 3, 00:14:42.521 "base_bdevs_list": [ 00:14:42.521 { 00:14:42.521 "name": "BaseBdev1", 00:14:42.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.521 "is_configured": false, 00:14:42.521 "data_offset": 0, 00:14:42.521 "data_size": 0 00:14:42.521 }, 00:14:42.521 { 00:14:42.521 "name": "BaseBdev2", 00:14:42.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.521 "is_configured": false, 00:14:42.521 "data_offset": 0, 00:14:42.521 "data_size": 0 00:14:42.521 }, 00:14:42.521 { 00:14:42.521 "name": "BaseBdev3", 00:14:42.521 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:42.521 "is_configured": false, 00:14:42.521 "data_offset": 0, 00:14:42.521 "data_size": 0 00:14:42.521 } 00:14:42.521 ] 00:14:42.521 }' 00:14:42.521 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.521 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.787 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:42.787 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.787 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.787 [2024-12-06 13:09:29.792299] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:42.788 [2024-12-06 13:09:29.792357] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:42.788 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.788 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:42.788 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.788 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.788 [2024-12-06 13:09:29.800264] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:42.788 [2024-12-06 13:09:29.800337] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:42.788 [2024-12-06 13:09:29.800352] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:42.788 [2024-12-06 13:09:29.800367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:14:42.788 [2024-12-06 13:09:29.800376] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:42.788 [2024-12-06 13:09:29.800389] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:43.054 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.055 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:43.055 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.055 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.055 [2024-12-06 13:09:29.848829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:43.055 BaseBdev1 00:14:43.055 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.055 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:43.055 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:43.055 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:43.055 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:43.055 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:43.055 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:43.055 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:43.055 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.055 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:14:43.055 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.055 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:43.055 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.055 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.055 [ 00:14:43.055 { 00:14:43.055 "name": "BaseBdev1", 00:14:43.055 "aliases": [ 00:14:43.055 "ba18650b-a2a0-4df5-b670-53ec262ce294" 00:14:43.055 ], 00:14:43.055 "product_name": "Malloc disk", 00:14:43.055 "block_size": 512, 00:14:43.055 "num_blocks": 65536, 00:14:43.055 "uuid": "ba18650b-a2a0-4df5-b670-53ec262ce294", 00:14:43.055 "assigned_rate_limits": { 00:14:43.055 "rw_ios_per_sec": 0, 00:14:43.055 "rw_mbytes_per_sec": 0, 00:14:43.055 "r_mbytes_per_sec": 0, 00:14:43.055 "w_mbytes_per_sec": 0 00:14:43.055 }, 00:14:43.055 "claimed": true, 00:14:43.055 "claim_type": "exclusive_write", 00:14:43.055 "zoned": false, 00:14:43.055 "supported_io_types": { 00:14:43.055 "read": true, 00:14:43.055 "write": true, 00:14:43.055 "unmap": true, 00:14:43.055 "flush": true, 00:14:43.055 "reset": true, 00:14:43.055 "nvme_admin": false, 00:14:43.055 "nvme_io": false, 00:14:43.055 "nvme_io_md": false, 00:14:43.055 "write_zeroes": true, 00:14:43.055 "zcopy": true, 00:14:43.055 "get_zone_info": false, 00:14:43.055 "zone_management": false, 00:14:43.055 "zone_append": false, 00:14:43.055 "compare": false, 00:14:43.055 "compare_and_write": false, 00:14:43.055 "abort": true, 00:14:43.055 "seek_hole": false, 00:14:43.055 "seek_data": false, 00:14:43.055 "copy": true, 00:14:43.055 "nvme_iov_md": false 00:14:43.055 }, 00:14:43.055 "memory_domains": [ 00:14:43.055 { 00:14:43.055 "dma_device_id": "system", 00:14:43.055 "dma_device_type": 1 00:14:43.055 }, 00:14:43.055 { 00:14:43.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:14:43.055 "dma_device_type": 2 00:14:43.055 } 00:14:43.055 ], 00:14:43.055 "driver_specific": {} 00:14:43.055 } 00:14:43.055 ] 00:14:43.055 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.055 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:43.055 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:43.055 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:43.055 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:43.055 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:43.055 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.055 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:43.055 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.055 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.055 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.055 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.055 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.055 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.055 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.055 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.055 13:09:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.055 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.055 "name": "Existed_Raid", 00:14:43.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.055 "strip_size_kb": 64, 00:14:43.055 "state": "configuring", 00:14:43.055 "raid_level": "concat", 00:14:43.055 "superblock": false, 00:14:43.055 "num_base_bdevs": 3, 00:14:43.055 "num_base_bdevs_discovered": 1, 00:14:43.055 "num_base_bdevs_operational": 3, 00:14:43.055 "base_bdevs_list": [ 00:14:43.055 { 00:14:43.055 "name": "BaseBdev1", 00:14:43.055 "uuid": "ba18650b-a2a0-4df5-b670-53ec262ce294", 00:14:43.055 "is_configured": true, 00:14:43.055 "data_offset": 0, 00:14:43.055 "data_size": 65536 00:14:43.055 }, 00:14:43.055 { 00:14:43.055 "name": "BaseBdev2", 00:14:43.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.055 "is_configured": false, 00:14:43.055 "data_offset": 0, 00:14:43.055 "data_size": 0 00:14:43.055 }, 00:14:43.055 { 00:14:43.055 "name": "BaseBdev3", 00:14:43.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.055 "is_configured": false, 00:14:43.055 "data_offset": 0, 00:14:43.055 "data_size": 0 00:14:43.055 } 00:14:43.055 ] 00:14:43.055 }' 00:14:43.055 13:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.055 13:09:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.647 13:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:43.647 13:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.647 13:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.647 [2024-12-06 13:09:30.401082] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:43.647 [2024-12-06 13:09:30.401162] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:43.647 13:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.647 13:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:43.647 13:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.647 13:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.647 [2024-12-06 13:09:30.409093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:43.647 [2024-12-06 13:09:30.411797] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:43.647 [2024-12-06 13:09:30.411864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:43.647 [2024-12-06 13:09:30.411879] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:43.647 [2024-12-06 13:09:30.411893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:43.647 13:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.647 13:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:43.647 13:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:43.647 13:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:43.647 13:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:43.647 13:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:43.647 13:09:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:43.647 13:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.647 13:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:43.647 13:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.647 13:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.647 13:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.647 13:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.647 13:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.647 13:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.647 13:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.647 13:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.647 13:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.647 13:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.647 "name": "Existed_Raid", 00:14:43.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.647 "strip_size_kb": 64, 00:14:43.647 "state": "configuring", 00:14:43.647 "raid_level": "concat", 00:14:43.647 "superblock": false, 00:14:43.647 "num_base_bdevs": 3, 00:14:43.647 "num_base_bdevs_discovered": 1, 00:14:43.647 "num_base_bdevs_operational": 3, 00:14:43.647 "base_bdevs_list": [ 00:14:43.647 { 00:14:43.647 "name": "BaseBdev1", 00:14:43.647 "uuid": "ba18650b-a2a0-4df5-b670-53ec262ce294", 00:14:43.647 "is_configured": true, 00:14:43.647 "data_offset": 
0, 00:14:43.647 "data_size": 65536 00:14:43.647 }, 00:14:43.647 { 00:14:43.647 "name": "BaseBdev2", 00:14:43.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.647 "is_configured": false, 00:14:43.647 "data_offset": 0, 00:14:43.647 "data_size": 0 00:14:43.647 }, 00:14:43.647 { 00:14:43.647 "name": "BaseBdev3", 00:14:43.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.647 "is_configured": false, 00:14:43.647 "data_offset": 0, 00:14:43.647 "data_size": 0 00:14:43.647 } 00:14:43.647 ] 00:14:43.647 }' 00:14:43.647 13:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.647 13:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.922 13:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:43.922 13:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.922 13:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.217 [2024-12-06 13:09:30.972069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:44.217 BaseBdev2 00:14:44.217 13:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.217 13:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:44.217 13:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:44.217 13:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:44.217 13:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:44.217 13:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:44.217 13:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:14:44.217 13:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:44.217 13:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.217 13:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.217 13:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.217 13:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:44.217 13:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.217 13:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.217 [ 00:14:44.217 { 00:14:44.217 "name": "BaseBdev2", 00:14:44.217 "aliases": [ 00:14:44.217 "af6f4de8-e3aa-44b2-bdd8-aac3f89abd18" 00:14:44.217 ], 00:14:44.217 "product_name": "Malloc disk", 00:14:44.217 "block_size": 512, 00:14:44.217 "num_blocks": 65536, 00:14:44.217 "uuid": "af6f4de8-e3aa-44b2-bdd8-aac3f89abd18", 00:14:44.217 "assigned_rate_limits": { 00:14:44.217 "rw_ios_per_sec": 0, 00:14:44.217 "rw_mbytes_per_sec": 0, 00:14:44.217 "r_mbytes_per_sec": 0, 00:14:44.217 "w_mbytes_per_sec": 0 00:14:44.217 }, 00:14:44.217 "claimed": true, 00:14:44.217 "claim_type": "exclusive_write", 00:14:44.217 "zoned": false, 00:14:44.217 "supported_io_types": { 00:14:44.217 "read": true, 00:14:44.217 "write": true, 00:14:44.217 "unmap": true, 00:14:44.217 "flush": true, 00:14:44.217 "reset": true, 00:14:44.217 "nvme_admin": false, 00:14:44.217 "nvme_io": false, 00:14:44.217 "nvme_io_md": false, 00:14:44.217 "write_zeroes": true, 00:14:44.217 "zcopy": true, 00:14:44.217 "get_zone_info": false, 00:14:44.217 "zone_management": false, 00:14:44.217 "zone_append": false, 00:14:44.217 "compare": false, 00:14:44.217 "compare_and_write": false, 00:14:44.217 "abort": true, 00:14:44.217 "seek_hole": 
false, 00:14:44.217 "seek_data": false, 00:14:44.217 "copy": true, 00:14:44.217 "nvme_iov_md": false 00:14:44.217 }, 00:14:44.217 "memory_domains": [ 00:14:44.217 { 00:14:44.217 "dma_device_id": "system", 00:14:44.217 "dma_device_type": 1 00:14:44.217 }, 00:14:44.217 { 00:14:44.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.217 "dma_device_type": 2 00:14:44.217 } 00:14:44.217 ], 00:14:44.217 "driver_specific": {} 00:14:44.217 } 00:14:44.217 ] 00:14:44.218 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.218 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:44.218 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:44.218 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:44.218 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:44.218 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.218 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:44.218 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:44.218 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.218 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.218 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.218 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.218 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.218 13:09:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.218 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.218 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.218 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.218 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.218 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.218 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.218 "name": "Existed_Raid", 00:14:44.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.218 "strip_size_kb": 64, 00:14:44.218 "state": "configuring", 00:14:44.218 "raid_level": "concat", 00:14:44.218 "superblock": false, 00:14:44.218 "num_base_bdevs": 3, 00:14:44.218 "num_base_bdevs_discovered": 2, 00:14:44.218 "num_base_bdevs_operational": 3, 00:14:44.218 "base_bdevs_list": [ 00:14:44.218 { 00:14:44.218 "name": "BaseBdev1", 00:14:44.218 "uuid": "ba18650b-a2a0-4df5-b670-53ec262ce294", 00:14:44.218 "is_configured": true, 00:14:44.218 "data_offset": 0, 00:14:44.218 "data_size": 65536 00:14:44.218 }, 00:14:44.218 { 00:14:44.218 "name": "BaseBdev2", 00:14:44.218 "uuid": "af6f4de8-e3aa-44b2-bdd8-aac3f89abd18", 00:14:44.218 "is_configured": true, 00:14:44.218 "data_offset": 0, 00:14:44.218 "data_size": 65536 00:14:44.218 }, 00:14:44.218 { 00:14:44.218 "name": "BaseBdev3", 00:14:44.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.218 "is_configured": false, 00:14:44.218 "data_offset": 0, 00:14:44.218 "data_size": 0 00:14:44.218 } 00:14:44.218 ] 00:14:44.218 }' 00:14:44.218 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.218 13:09:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:44.490 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:44.490 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.490 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.749 [2024-12-06 13:09:31.553526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:44.749 [2024-12-06 13:09:31.553608] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:44.749 [2024-12-06 13:09:31.553628] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:44.749 [2024-12-06 13:09:31.554052] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:44.749 [2024-12-06 13:09:31.554302] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:44.749 [2024-12-06 13:09:31.554320] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:44.749 [2024-12-06 13:09:31.554777] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.749 BaseBdev3 00:14:44.749 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.749 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:44.750 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:44.750 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:44.750 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:44.750 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:44.750 13:09:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:44.750 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:44.750 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.750 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.750 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.750 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:44.750 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.750 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.750 [ 00:14:44.750 { 00:14:44.750 "name": "BaseBdev3", 00:14:44.750 "aliases": [ 00:14:44.750 "912f0677-ee71-45f1-b4be-016316fea911" 00:14:44.750 ], 00:14:44.750 "product_name": "Malloc disk", 00:14:44.750 "block_size": 512, 00:14:44.750 "num_blocks": 65536, 00:14:44.750 "uuid": "912f0677-ee71-45f1-b4be-016316fea911", 00:14:44.750 "assigned_rate_limits": { 00:14:44.750 "rw_ios_per_sec": 0, 00:14:44.750 "rw_mbytes_per_sec": 0, 00:14:44.750 "r_mbytes_per_sec": 0, 00:14:44.750 "w_mbytes_per_sec": 0 00:14:44.750 }, 00:14:44.750 "claimed": true, 00:14:44.750 "claim_type": "exclusive_write", 00:14:44.750 "zoned": false, 00:14:44.750 "supported_io_types": { 00:14:44.750 "read": true, 00:14:44.750 "write": true, 00:14:44.750 "unmap": true, 00:14:44.750 "flush": true, 00:14:44.750 "reset": true, 00:14:44.750 "nvme_admin": false, 00:14:44.750 "nvme_io": false, 00:14:44.750 "nvme_io_md": false, 00:14:44.750 "write_zeroes": true, 00:14:44.750 "zcopy": true, 00:14:44.750 "get_zone_info": false, 00:14:44.750 "zone_management": false, 00:14:44.750 "zone_append": false, 00:14:44.750 "compare": false, 
00:14:44.750 "compare_and_write": false, 00:14:44.750 "abort": true, 00:14:44.750 "seek_hole": false, 00:14:44.750 "seek_data": false, 00:14:44.750 "copy": true, 00:14:44.750 "nvme_iov_md": false 00:14:44.750 }, 00:14:44.750 "memory_domains": [ 00:14:44.750 { 00:14:44.750 "dma_device_id": "system", 00:14:44.750 "dma_device_type": 1 00:14:44.750 }, 00:14:44.750 { 00:14:44.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.750 "dma_device_type": 2 00:14:44.750 } 00:14:44.750 ], 00:14:44.750 "driver_specific": {} 00:14:44.750 } 00:14:44.750 ] 00:14:44.750 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.750 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:44.750 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:44.750 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:44.750 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:14:44.750 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.750 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.750 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:44.750 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.750 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.750 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.750 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.750 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:44.750 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.750 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.750 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.750 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.750 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.750 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.750 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.750 "name": "Existed_Raid", 00:14:44.750 "uuid": "2df7575b-aea7-440d-80ff-7a27963f2900", 00:14:44.750 "strip_size_kb": 64, 00:14:44.750 "state": "online", 00:14:44.750 "raid_level": "concat", 00:14:44.750 "superblock": false, 00:14:44.750 "num_base_bdevs": 3, 00:14:44.750 "num_base_bdevs_discovered": 3, 00:14:44.750 "num_base_bdevs_operational": 3, 00:14:44.750 "base_bdevs_list": [ 00:14:44.750 { 00:14:44.750 "name": "BaseBdev1", 00:14:44.750 "uuid": "ba18650b-a2a0-4df5-b670-53ec262ce294", 00:14:44.750 "is_configured": true, 00:14:44.750 "data_offset": 0, 00:14:44.750 "data_size": 65536 00:14:44.750 }, 00:14:44.750 { 00:14:44.750 "name": "BaseBdev2", 00:14:44.750 "uuid": "af6f4de8-e3aa-44b2-bdd8-aac3f89abd18", 00:14:44.750 "is_configured": true, 00:14:44.750 "data_offset": 0, 00:14:44.750 "data_size": 65536 00:14:44.750 }, 00:14:44.750 { 00:14:44.750 "name": "BaseBdev3", 00:14:44.750 "uuid": "912f0677-ee71-45f1-b4be-016316fea911", 00:14:44.750 "is_configured": true, 00:14:44.750 "data_offset": 0, 00:14:44.750 "data_size": 65536 00:14:44.750 } 00:14:44.750 ] 00:14:44.750 }' 00:14:44.750 13:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:14:44.750 13:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.318 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:45.318 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:45.318 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:45.318 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:45.318 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:45.318 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:45.318 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:45.318 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.318 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:45.318 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.318 [2024-12-06 13:09:32.118197] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:45.318 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.318 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:45.318 "name": "Existed_Raid", 00:14:45.318 "aliases": [ 00:14:45.318 "2df7575b-aea7-440d-80ff-7a27963f2900" 00:14:45.318 ], 00:14:45.318 "product_name": "Raid Volume", 00:14:45.318 "block_size": 512, 00:14:45.318 "num_blocks": 196608, 00:14:45.318 "uuid": "2df7575b-aea7-440d-80ff-7a27963f2900", 00:14:45.319 "assigned_rate_limits": { 00:14:45.319 "rw_ios_per_sec": 0, 00:14:45.319 "rw_mbytes_per_sec": 0, 00:14:45.319 "r_mbytes_per_sec": 
0, 00:14:45.319 "w_mbytes_per_sec": 0 00:14:45.319 }, 00:14:45.319 "claimed": false, 00:14:45.319 "zoned": false, 00:14:45.319 "supported_io_types": { 00:14:45.319 "read": true, 00:14:45.319 "write": true, 00:14:45.319 "unmap": true, 00:14:45.319 "flush": true, 00:14:45.319 "reset": true, 00:14:45.319 "nvme_admin": false, 00:14:45.319 "nvme_io": false, 00:14:45.319 "nvme_io_md": false, 00:14:45.319 "write_zeroes": true, 00:14:45.319 "zcopy": false, 00:14:45.319 "get_zone_info": false, 00:14:45.319 "zone_management": false, 00:14:45.319 "zone_append": false, 00:14:45.319 "compare": false, 00:14:45.319 "compare_and_write": false, 00:14:45.319 "abort": false, 00:14:45.319 "seek_hole": false, 00:14:45.319 "seek_data": false, 00:14:45.319 "copy": false, 00:14:45.319 "nvme_iov_md": false 00:14:45.319 }, 00:14:45.319 "memory_domains": [ 00:14:45.319 { 00:14:45.319 "dma_device_id": "system", 00:14:45.319 "dma_device_type": 1 00:14:45.319 }, 00:14:45.319 { 00:14:45.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.319 "dma_device_type": 2 00:14:45.319 }, 00:14:45.319 { 00:14:45.319 "dma_device_id": "system", 00:14:45.319 "dma_device_type": 1 00:14:45.319 }, 00:14:45.319 { 00:14:45.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.319 "dma_device_type": 2 00:14:45.319 }, 00:14:45.319 { 00:14:45.319 "dma_device_id": "system", 00:14:45.319 "dma_device_type": 1 00:14:45.319 }, 00:14:45.319 { 00:14:45.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.319 "dma_device_type": 2 00:14:45.319 } 00:14:45.319 ], 00:14:45.319 "driver_specific": { 00:14:45.319 "raid": { 00:14:45.319 "uuid": "2df7575b-aea7-440d-80ff-7a27963f2900", 00:14:45.319 "strip_size_kb": 64, 00:14:45.319 "state": "online", 00:14:45.319 "raid_level": "concat", 00:14:45.319 "superblock": false, 00:14:45.319 "num_base_bdevs": 3, 00:14:45.319 "num_base_bdevs_discovered": 3, 00:14:45.319 "num_base_bdevs_operational": 3, 00:14:45.319 "base_bdevs_list": [ 00:14:45.319 { 00:14:45.319 "name": "BaseBdev1", 
00:14:45.319 "uuid": "ba18650b-a2a0-4df5-b670-53ec262ce294", 00:14:45.319 "is_configured": true, 00:14:45.319 "data_offset": 0, 00:14:45.319 "data_size": 65536 00:14:45.319 }, 00:14:45.319 { 00:14:45.319 "name": "BaseBdev2", 00:14:45.319 "uuid": "af6f4de8-e3aa-44b2-bdd8-aac3f89abd18", 00:14:45.319 "is_configured": true, 00:14:45.319 "data_offset": 0, 00:14:45.319 "data_size": 65536 00:14:45.319 }, 00:14:45.319 { 00:14:45.319 "name": "BaseBdev3", 00:14:45.319 "uuid": "912f0677-ee71-45f1-b4be-016316fea911", 00:14:45.319 "is_configured": true, 00:14:45.319 "data_offset": 0, 00:14:45.319 "data_size": 65536 00:14:45.319 } 00:14:45.319 ] 00:14:45.319 } 00:14:45.319 } 00:14:45.319 }' 00:14:45.319 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:45.319 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:45.319 BaseBdev2 00:14:45.319 BaseBdev3' 00:14:45.319 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.319 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:45.319 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:45.319 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:45.319 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.319 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.319 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.319 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:45.319 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:45.319 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:45.319 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:45.319 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:45.319 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.319 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.319 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.579 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.579 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:45.579 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:45.579 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:45.579 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.579 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:45.579 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.579 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.579 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.579 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:14:45.579 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:45.580 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:45.580 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.580 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.580 [2024-12-06 13:09:32.437901] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:45.580 [2024-12-06 13:09:32.437938] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:45.580 [2024-12-06 13:09:32.438019] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:45.580 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.580 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:45.580 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:14:45.580 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:45.580 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:45.580 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:45.580 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:14:45.580 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.580 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:45.580 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:45.580 13:09:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.580 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:45.580 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.580 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.580 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.580 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.580 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.580 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.580 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.580 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.580 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.580 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.580 "name": "Existed_Raid", 00:14:45.580 "uuid": "2df7575b-aea7-440d-80ff-7a27963f2900", 00:14:45.580 "strip_size_kb": 64, 00:14:45.580 "state": "offline", 00:14:45.580 "raid_level": "concat", 00:14:45.580 "superblock": false, 00:14:45.580 "num_base_bdevs": 3, 00:14:45.580 "num_base_bdevs_discovered": 2, 00:14:45.580 "num_base_bdevs_operational": 2, 00:14:45.580 "base_bdevs_list": [ 00:14:45.580 { 00:14:45.580 "name": null, 00:14:45.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.580 "is_configured": false, 00:14:45.580 "data_offset": 0, 00:14:45.580 "data_size": 65536 00:14:45.580 }, 00:14:45.580 { 00:14:45.580 "name": "BaseBdev2", 00:14:45.580 "uuid": 
"af6f4de8-e3aa-44b2-bdd8-aac3f89abd18", 00:14:45.580 "is_configured": true, 00:14:45.580 "data_offset": 0, 00:14:45.580 "data_size": 65536 00:14:45.580 }, 00:14:45.580 { 00:14:45.580 "name": "BaseBdev3", 00:14:45.580 "uuid": "912f0677-ee71-45f1-b4be-016316fea911", 00:14:45.580 "is_configured": true, 00:14:45.580 "data_offset": 0, 00:14:45.580 "data_size": 65536 00:14:45.580 } 00:14:45.580 ] 00:14:45.580 }' 00:14:45.580 13:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.580 13:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.148 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:46.148 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:46.148 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.148 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:46.148 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.148 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.148 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.148 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:46.148 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:46.148 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:46.148 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.148 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.148 [2024-12-06 13:09:33.116196] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:46.406 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.406 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:46.406 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:46.406 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.406 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:46.406 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.406 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.406 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.406 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:46.406 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:46.406 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:46.406 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.406 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.406 [2024-12-06 13:09:33.272447] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:46.406 [2024-12-06 13:09:33.272572] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:46.406 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.406 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:46.406 13:09:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:46.406 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.406 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.406 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:46.406 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.406 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.406 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:46.406 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:46.406 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:46.406 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:46.406 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:46.406 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:46.406 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.406 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.666 BaseBdev2 00:14:46.666 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.666 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:46.666 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:46.666 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:46.666 
13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:46.666 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:46.666 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:46.666 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:46.666 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.666 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.666 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.666 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:46.666 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.666 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.666 [ 00:14:46.666 { 00:14:46.666 "name": "BaseBdev2", 00:14:46.666 "aliases": [ 00:14:46.666 "7914f772-7c71-487f-ba81-6547933197a2" 00:14:46.666 ], 00:14:46.666 "product_name": "Malloc disk", 00:14:46.666 "block_size": 512, 00:14:46.666 "num_blocks": 65536, 00:14:46.666 "uuid": "7914f772-7c71-487f-ba81-6547933197a2", 00:14:46.666 "assigned_rate_limits": { 00:14:46.666 "rw_ios_per_sec": 0, 00:14:46.666 "rw_mbytes_per_sec": 0, 00:14:46.666 "r_mbytes_per_sec": 0, 00:14:46.666 "w_mbytes_per_sec": 0 00:14:46.666 }, 00:14:46.666 "claimed": false, 00:14:46.666 "zoned": false, 00:14:46.666 "supported_io_types": { 00:14:46.666 "read": true, 00:14:46.666 "write": true, 00:14:46.666 "unmap": true, 00:14:46.666 "flush": true, 00:14:46.666 "reset": true, 00:14:46.666 "nvme_admin": false, 00:14:46.666 "nvme_io": false, 00:14:46.666 "nvme_io_md": false, 00:14:46.666 "write_zeroes": true, 
00:14:46.666 "zcopy": true, 00:14:46.666 "get_zone_info": false, 00:14:46.666 "zone_management": false, 00:14:46.666 "zone_append": false, 00:14:46.666 "compare": false, 00:14:46.666 "compare_and_write": false, 00:14:46.666 "abort": true, 00:14:46.666 "seek_hole": false, 00:14:46.666 "seek_data": false, 00:14:46.666 "copy": true, 00:14:46.666 "nvme_iov_md": false 00:14:46.666 }, 00:14:46.666 "memory_domains": [ 00:14:46.666 { 00:14:46.666 "dma_device_id": "system", 00:14:46.666 "dma_device_type": 1 00:14:46.666 }, 00:14:46.666 { 00:14:46.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.666 "dma_device_type": 2 00:14:46.666 } 00:14:46.666 ], 00:14:46.666 "driver_specific": {} 00:14:46.666 } 00:14:46.666 ] 00:14:46.666 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.666 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:46.666 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:46.666 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:46.666 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:46.666 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.666 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.666 BaseBdev3 00:14:46.666 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.666 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:46.666 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:46.667 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:46.667 13:09:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:46.667 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:46.667 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:46.667 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:46.667 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.667 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.667 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.667 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:46.667 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.667 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.667 [ 00:14:46.667 { 00:14:46.667 "name": "BaseBdev3", 00:14:46.667 "aliases": [ 00:14:46.667 "e9da2b30-27b7-4b59-a0d5-7102c036b471" 00:14:46.667 ], 00:14:46.667 "product_name": "Malloc disk", 00:14:46.667 "block_size": 512, 00:14:46.667 "num_blocks": 65536, 00:14:46.667 "uuid": "e9da2b30-27b7-4b59-a0d5-7102c036b471", 00:14:46.667 "assigned_rate_limits": { 00:14:46.667 "rw_ios_per_sec": 0, 00:14:46.667 "rw_mbytes_per_sec": 0, 00:14:46.667 "r_mbytes_per_sec": 0, 00:14:46.667 "w_mbytes_per_sec": 0 00:14:46.667 }, 00:14:46.667 "claimed": false, 00:14:46.667 "zoned": false, 00:14:46.667 "supported_io_types": { 00:14:46.667 "read": true, 00:14:46.667 "write": true, 00:14:46.667 "unmap": true, 00:14:46.667 "flush": true, 00:14:46.667 "reset": true, 00:14:46.667 "nvme_admin": false, 00:14:46.667 "nvme_io": false, 00:14:46.667 "nvme_io_md": false, 00:14:46.667 "write_zeroes": true, 
00:14:46.667 "zcopy": true, 00:14:46.667 "get_zone_info": false, 00:14:46.667 "zone_management": false, 00:14:46.667 "zone_append": false, 00:14:46.667 "compare": false, 00:14:46.667 "compare_and_write": false, 00:14:46.667 "abort": true, 00:14:46.667 "seek_hole": false, 00:14:46.667 "seek_data": false, 00:14:46.667 "copy": true, 00:14:46.667 "nvme_iov_md": false 00:14:46.667 }, 00:14:46.667 "memory_domains": [ 00:14:46.667 { 00:14:46.667 "dma_device_id": "system", 00:14:46.667 "dma_device_type": 1 00:14:46.667 }, 00:14:46.667 { 00:14:46.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.667 "dma_device_type": 2 00:14:46.667 } 00:14:46.667 ], 00:14:46.667 "driver_specific": {} 00:14:46.667 } 00:14:46.667 ] 00:14:46.667 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.667 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:46.667 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:46.667 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:46.667 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:46.667 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.667 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.667 [2024-12-06 13:09:33.584566] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:46.667 [2024-12-06 13:09:33.584625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:46.667 [2024-12-06 13:09:33.584659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:46.667 [2024-12-06 13:09:33.587241] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:46.667 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.667 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:46.667 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.667 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.667 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:46.667 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.667 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:46.667 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.667 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.667 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.667 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.667 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.667 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.667 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.667 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.667 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.667 13:09:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.667 "name": "Existed_Raid", 00:14:46.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.667 "strip_size_kb": 64, 00:14:46.667 "state": "configuring", 00:14:46.667 "raid_level": "concat", 00:14:46.667 "superblock": false, 00:14:46.667 "num_base_bdevs": 3, 00:14:46.667 "num_base_bdevs_discovered": 2, 00:14:46.667 "num_base_bdevs_operational": 3, 00:14:46.667 "base_bdevs_list": [ 00:14:46.667 { 00:14:46.667 "name": "BaseBdev1", 00:14:46.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.667 "is_configured": false, 00:14:46.667 "data_offset": 0, 00:14:46.667 "data_size": 0 00:14:46.667 }, 00:14:46.667 { 00:14:46.667 "name": "BaseBdev2", 00:14:46.667 "uuid": "7914f772-7c71-487f-ba81-6547933197a2", 00:14:46.667 "is_configured": true, 00:14:46.667 "data_offset": 0, 00:14:46.668 "data_size": 65536 00:14:46.668 }, 00:14:46.668 { 00:14:46.668 "name": "BaseBdev3", 00:14:46.668 "uuid": "e9da2b30-27b7-4b59-a0d5-7102c036b471", 00:14:46.668 "is_configured": true, 00:14:46.668 "data_offset": 0, 00:14:46.668 "data_size": 65536 00:14:46.668 } 00:14:46.668 ] 00:14:46.668 }' 00:14:46.668 13:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.668 13:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.234 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:47.234 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.234 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.234 [2024-12-06 13:09:34.112805] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:47.234 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.234 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:47.234 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.234 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:47.234 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:47.234 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.234 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:47.234 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.234 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.234 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.234 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.234 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.234 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.234 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.234 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.235 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.235 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.235 "name": "Existed_Raid", 00:14:47.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.235 "strip_size_kb": 64, 00:14:47.235 "state": "configuring", 00:14:47.235 "raid_level": "concat", 00:14:47.235 "superblock": false, 
00:14:47.235 "num_base_bdevs": 3, 00:14:47.235 "num_base_bdevs_discovered": 1, 00:14:47.235 "num_base_bdevs_operational": 3, 00:14:47.235 "base_bdevs_list": [ 00:14:47.235 { 00:14:47.235 "name": "BaseBdev1", 00:14:47.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.235 "is_configured": false, 00:14:47.235 "data_offset": 0, 00:14:47.235 "data_size": 0 00:14:47.235 }, 00:14:47.235 { 00:14:47.235 "name": null, 00:14:47.235 "uuid": "7914f772-7c71-487f-ba81-6547933197a2", 00:14:47.235 "is_configured": false, 00:14:47.235 "data_offset": 0, 00:14:47.235 "data_size": 65536 00:14:47.235 }, 00:14:47.235 { 00:14:47.235 "name": "BaseBdev3", 00:14:47.235 "uuid": "e9da2b30-27b7-4b59-a0d5-7102c036b471", 00:14:47.235 "is_configured": true, 00:14:47.235 "data_offset": 0, 00:14:47.235 "data_size": 65536 00:14:47.235 } 00:14:47.235 ] 00:14:47.235 }' 00:14:47.235 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.235 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.802 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.802 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:47.802 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.802 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.802 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.802 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:47.802 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:47.802 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.802 
13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.802 [2024-12-06 13:09:34.710769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:47.802 BaseBdev1 00:14:47.802 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.802 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:47.802 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:47.802 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:47.802 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:47.802 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:47.802 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:47.802 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:47.802 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.802 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.802 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.802 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:47.802 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.802 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.802 [ 00:14:47.802 { 00:14:47.802 "name": "BaseBdev1", 00:14:47.802 "aliases": [ 00:14:47.802 "99a8c216-6e0e-4756-825b-04bfdfe39a22" 00:14:47.802 ], 00:14:47.802 "product_name": 
"Malloc disk", 00:14:47.802 "block_size": 512, 00:14:47.802 "num_blocks": 65536, 00:14:47.802 "uuid": "99a8c216-6e0e-4756-825b-04bfdfe39a22", 00:14:47.802 "assigned_rate_limits": { 00:14:47.802 "rw_ios_per_sec": 0, 00:14:47.802 "rw_mbytes_per_sec": 0, 00:14:47.802 "r_mbytes_per_sec": 0, 00:14:47.802 "w_mbytes_per_sec": 0 00:14:47.802 }, 00:14:47.802 "claimed": true, 00:14:47.802 "claim_type": "exclusive_write", 00:14:47.802 "zoned": false, 00:14:47.802 "supported_io_types": { 00:14:47.802 "read": true, 00:14:47.802 "write": true, 00:14:47.802 "unmap": true, 00:14:47.802 "flush": true, 00:14:47.802 "reset": true, 00:14:47.802 "nvme_admin": false, 00:14:47.802 "nvme_io": false, 00:14:47.802 "nvme_io_md": false, 00:14:47.802 "write_zeroes": true, 00:14:47.802 "zcopy": true, 00:14:47.802 "get_zone_info": false, 00:14:47.802 "zone_management": false, 00:14:47.802 "zone_append": false, 00:14:47.803 "compare": false, 00:14:47.803 "compare_and_write": false, 00:14:47.803 "abort": true, 00:14:47.803 "seek_hole": false, 00:14:47.803 "seek_data": false, 00:14:47.803 "copy": true, 00:14:47.803 "nvme_iov_md": false 00:14:47.803 }, 00:14:47.803 "memory_domains": [ 00:14:47.803 { 00:14:47.803 "dma_device_id": "system", 00:14:47.803 "dma_device_type": 1 00:14:47.803 }, 00:14:47.803 { 00:14:47.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.803 "dma_device_type": 2 00:14:47.803 } 00:14:47.803 ], 00:14:47.803 "driver_specific": {} 00:14:47.803 } 00:14:47.803 ] 00:14:47.803 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.803 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:47.803 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:47.803 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.803 13:09:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:47.803 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:47.803 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.803 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:47.803 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.803 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.803 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.803 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.803 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.803 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.803 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.803 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.803 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.803 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.803 "name": "Existed_Raid", 00:14:47.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.803 "strip_size_kb": 64, 00:14:47.803 "state": "configuring", 00:14:47.803 "raid_level": "concat", 00:14:47.803 "superblock": false, 00:14:47.803 "num_base_bdevs": 3, 00:14:47.803 "num_base_bdevs_discovered": 2, 00:14:47.803 "num_base_bdevs_operational": 3, 00:14:47.803 "base_bdevs_list": [ 00:14:47.803 { 00:14:47.803 "name": "BaseBdev1", 
00:14:47.803 "uuid": "99a8c216-6e0e-4756-825b-04bfdfe39a22", 00:14:47.803 "is_configured": true, 00:14:47.803 "data_offset": 0, 00:14:47.803 "data_size": 65536 00:14:47.803 }, 00:14:47.803 { 00:14:47.803 "name": null, 00:14:47.803 "uuid": "7914f772-7c71-487f-ba81-6547933197a2", 00:14:47.803 "is_configured": false, 00:14:47.803 "data_offset": 0, 00:14:47.803 "data_size": 65536 00:14:47.803 }, 00:14:47.803 { 00:14:47.803 "name": "BaseBdev3", 00:14:47.803 "uuid": "e9da2b30-27b7-4b59-a0d5-7102c036b471", 00:14:47.803 "is_configured": true, 00:14:47.803 "data_offset": 0, 00:14:47.803 "data_size": 65536 00:14:47.803 } 00:14:47.803 ] 00:14:47.803 }' 00:14:47.803 13:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.803 13:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.370 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.370 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:48.370 13:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.370 13:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.370 13:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.370 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:48.370 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:48.370 13:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.370 13:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.370 [2024-12-06 13:09:35.299149] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:48.370 
13:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.370 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:48.370 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.370 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.370 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:48.370 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.370 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:48.370 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.370 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.370 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.370 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.370 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.370 13:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.370 13:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.370 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.370 13:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.370 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.370 "name": "Existed_Raid", 00:14:48.370 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:48.370 "strip_size_kb": 64, 00:14:48.370 "state": "configuring", 00:14:48.370 "raid_level": "concat", 00:14:48.370 "superblock": false, 00:14:48.370 "num_base_bdevs": 3, 00:14:48.370 "num_base_bdevs_discovered": 1, 00:14:48.370 "num_base_bdevs_operational": 3, 00:14:48.370 "base_bdevs_list": [ 00:14:48.370 { 00:14:48.370 "name": "BaseBdev1", 00:14:48.370 "uuid": "99a8c216-6e0e-4756-825b-04bfdfe39a22", 00:14:48.370 "is_configured": true, 00:14:48.370 "data_offset": 0, 00:14:48.370 "data_size": 65536 00:14:48.370 }, 00:14:48.370 { 00:14:48.370 "name": null, 00:14:48.370 "uuid": "7914f772-7c71-487f-ba81-6547933197a2", 00:14:48.370 "is_configured": false, 00:14:48.370 "data_offset": 0, 00:14:48.370 "data_size": 65536 00:14:48.370 }, 00:14:48.370 { 00:14:48.370 "name": null, 00:14:48.370 "uuid": "e9da2b30-27b7-4b59-a0d5-7102c036b471", 00:14:48.370 "is_configured": false, 00:14:48.370 "data_offset": 0, 00:14:48.370 "data_size": 65536 00:14:48.370 } 00:14:48.370 ] 00:14:48.370 }' 00:14:48.370 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.370 13:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.936 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.936 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:48.936 13:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.936 13:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.936 13:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.936 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:48.936 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:48.936 13:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.936 13:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.936 [2024-12-06 13:09:35.847246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:48.936 13:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.936 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:48.936 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.936 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.936 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:48.936 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.936 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:48.936 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.936 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.936 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.937 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.937 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.937 13:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.937 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:14:48.937 13:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.937 13:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.937 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.937 "name": "Existed_Raid", 00:14:48.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.937 "strip_size_kb": 64, 00:14:48.937 "state": "configuring", 00:14:48.937 "raid_level": "concat", 00:14:48.937 "superblock": false, 00:14:48.937 "num_base_bdevs": 3, 00:14:48.937 "num_base_bdevs_discovered": 2, 00:14:48.937 "num_base_bdevs_operational": 3, 00:14:48.937 "base_bdevs_list": [ 00:14:48.937 { 00:14:48.937 "name": "BaseBdev1", 00:14:48.937 "uuid": "99a8c216-6e0e-4756-825b-04bfdfe39a22", 00:14:48.937 "is_configured": true, 00:14:48.937 "data_offset": 0, 00:14:48.937 "data_size": 65536 00:14:48.937 }, 00:14:48.937 { 00:14:48.937 "name": null, 00:14:48.937 "uuid": "7914f772-7c71-487f-ba81-6547933197a2", 00:14:48.937 "is_configured": false, 00:14:48.937 "data_offset": 0, 00:14:48.937 "data_size": 65536 00:14:48.937 }, 00:14:48.937 { 00:14:48.937 "name": "BaseBdev3", 00:14:48.937 "uuid": "e9da2b30-27b7-4b59-a0d5-7102c036b471", 00:14:48.937 "is_configured": true, 00:14:48.937 "data_offset": 0, 00:14:48.937 "data_size": 65536 00:14:48.937 } 00:14:48.937 ] 00:14:48.937 }' 00:14:48.937 13:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.937 13:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.503 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.503 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:49.503 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:49.503 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.503 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.503 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:49.503 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:49.503 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.503 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.503 [2024-12-06 13:09:36.407515] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:49.503 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.503 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:49.503 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.503 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.503 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:49.503 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.503 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:49.503 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.503 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.503 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.503 13:09:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.503 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.503 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.503 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.503 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.503 13:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.761 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.761 "name": "Existed_Raid", 00:14:49.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.761 "strip_size_kb": 64, 00:14:49.761 "state": "configuring", 00:14:49.761 "raid_level": "concat", 00:14:49.761 "superblock": false, 00:14:49.761 "num_base_bdevs": 3, 00:14:49.761 "num_base_bdevs_discovered": 1, 00:14:49.761 "num_base_bdevs_operational": 3, 00:14:49.761 "base_bdevs_list": [ 00:14:49.761 { 00:14:49.761 "name": null, 00:14:49.761 "uuid": "99a8c216-6e0e-4756-825b-04bfdfe39a22", 00:14:49.761 "is_configured": false, 00:14:49.761 "data_offset": 0, 00:14:49.761 "data_size": 65536 00:14:49.761 }, 00:14:49.761 { 00:14:49.761 "name": null, 00:14:49.761 "uuid": "7914f772-7c71-487f-ba81-6547933197a2", 00:14:49.761 "is_configured": false, 00:14:49.761 "data_offset": 0, 00:14:49.761 "data_size": 65536 00:14:49.761 }, 00:14:49.761 { 00:14:49.761 "name": "BaseBdev3", 00:14:49.761 "uuid": "e9da2b30-27b7-4b59-a0d5-7102c036b471", 00:14:49.761 "is_configured": true, 00:14:49.761 "data_offset": 0, 00:14:49.761 "data_size": 65536 00:14:49.761 } 00:14:49.761 ] 00:14:49.761 }' 00:14:49.761 13:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.761 13:09:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.326 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.326 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:50.326 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.326 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.326 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.326 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:50.326 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:50.326 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.326 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.326 [2024-12-06 13:09:37.098458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:50.326 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.326 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:50.326 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.326 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.326 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:50.326 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.326 13:09:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:50.326 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.326 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.326 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.326 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.327 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.327 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.327 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.327 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.327 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.327 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.327 "name": "Existed_Raid", 00:14:50.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.327 "strip_size_kb": 64, 00:14:50.327 "state": "configuring", 00:14:50.327 "raid_level": "concat", 00:14:50.327 "superblock": false, 00:14:50.327 "num_base_bdevs": 3, 00:14:50.327 "num_base_bdevs_discovered": 2, 00:14:50.327 "num_base_bdevs_operational": 3, 00:14:50.327 "base_bdevs_list": [ 00:14:50.327 { 00:14:50.327 "name": null, 00:14:50.327 "uuid": "99a8c216-6e0e-4756-825b-04bfdfe39a22", 00:14:50.327 "is_configured": false, 00:14:50.327 "data_offset": 0, 00:14:50.327 "data_size": 65536 00:14:50.327 }, 00:14:50.327 { 00:14:50.327 "name": "BaseBdev2", 00:14:50.327 "uuid": "7914f772-7c71-487f-ba81-6547933197a2", 00:14:50.327 "is_configured": true, 00:14:50.327 "data_offset": 
0, 00:14:50.327 "data_size": 65536 00:14:50.327 }, 00:14:50.327 { 00:14:50.327 "name": "BaseBdev3", 00:14:50.327 "uuid": "e9da2b30-27b7-4b59-a0d5-7102c036b471", 00:14:50.327 "is_configured": true, 00:14:50.327 "data_offset": 0, 00:14:50.327 "data_size": 65536 00:14:50.327 } 00:14:50.327 ] 00:14:50.327 }' 00:14:50.327 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.327 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.893 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.893 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.893 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:50.893 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.893 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.893 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:50.893 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:50.893 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.893 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.893 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.893 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.893 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 99a8c216-6e0e-4756-825b-04bfdfe39a22 00:14:50.893 13:09:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.893 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.893 [2024-12-06 13:09:37.769429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:50.893 [2024-12-06 13:09:37.769518] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:50.893 [2024-12-06 13:09:37.769536] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:50.893 [2024-12-06 13:09:37.769852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:50.893 [2024-12-06 13:09:37.770049] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:50.893 [2024-12-06 13:09:37.770064] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:50.893 [2024-12-06 13:09:37.770375] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:50.893 NewBaseBdev 00:14:50.893 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.893 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:50.893 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:50.893 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:50.893 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:50.893 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:50.893 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:50.893 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:50.893 
13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.893 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.893 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.893 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:50.893 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.893 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.893 [ 00:14:50.893 { 00:14:50.893 "name": "NewBaseBdev", 00:14:50.893 "aliases": [ 00:14:50.893 "99a8c216-6e0e-4756-825b-04bfdfe39a22" 00:14:50.893 ], 00:14:50.893 "product_name": "Malloc disk", 00:14:50.893 "block_size": 512, 00:14:50.893 "num_blocks": 65536, 00:14:50.893 "uuid": "99a8c216-6e0e-4756-825b-04bfdfe39a22", 00:14:50.893 "assigned_rate_limits": { 00:14:50.893 "rw_ios_per_sec": 0, 00:14:50.893 "rw_mbytes_per_sec": 0, 00:14:50.893 "r_mbytes_per_sec": 0, 00:14:50.893 "w_mbytes_per_sec": 0 00:14:50.893 }, 00:14:50.893 "claimed": true, 00:14:50.893 "claim_type": "exclusive_write", 00:14:50.893 "zoned": false, 00:14:50.893 "supported_io_types": { 00:14:50.893 "read": true, 00:14:50.893 "write": true, 00:14:50.893 "unmap": true, 00:14:50.893 "flush": true, 00:14:50.893 "reset": true, 00:14:50.893 "nvme_admin": false, 00:14:50.893 "nvme_io": false, 00:14:50.893 "nvme_io_md": false, 00:14:50.893 "write_zeroes": true, 00:14:50.893 "zcopy": true, 00:14:50.893 "get_zone_info": false, 00:14:50.893 "zone_management": false, 00:14:50.893 "zone_append": false, 00:14:50.893 "compare": false, 00:14:50.893 "compare_and_write": false, 00:14:50.893 "abort": true, 00:14:50.893 "seek_hole": false, 00:14:50.893 "seek_data": false, 00:14:50.893 "copy": true, 00:14:50.893 "nvme_iov_md": false 00:14:50.893 }, 00:14:50.893 
"memory_domains": [ 00:14:50.893 { 00:14:50.893 "dma_device_id": "system", 00:14:50.893 "dma_device_type": 1 00:14:50.893 }, 00:14:50.893 { 00:14:50.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.893 "dma_device_type": 2 00:14:50.893 } 00:14:50.893 ], 00:14:50.893 "driver_specific": {} 00:14:50.893 } 00:14:50.893 ] 00:14:50.893 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.893 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:50.893 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:14:50.893 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.893 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:50.893 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:50.893 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.894 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:50.894 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.894 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.894 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.894 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.894 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.894 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.894 13:09:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:50.894 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.894 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.894 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.894 "name": "Existed_Raid", 00:14:50.894 "uuid": "ec08a42e-9ace-46ac-b6da-ef3edb8b3139", 00:14:50.894 "strip_size_kb": 64, 00:14:50.894 "state": "online", 00:14:50.894 "raid_level": "concat", 00:14:50.894 "superblock": false, 00:14:50.894 "num_base_bdevs": 3, 00:14:50.894 "num_base_bdevs_discovered": 3, 00:14:50.894 "num_base_bdevs_operational": 3, 00:14:50.894 "base_bdevs_list": [ 00:14:50.894 { 00:14:50.894 "name": "NewBaseBdev", 00:14:50.894 "uuid": "99a8c216-6e0e-4756-825b-04bfdfe39a22", 00:14:50.894 "is_configured": true, 00:14:50.894 "data_offset": 0, 00:14:50.894 "data_size": 65536 00:14:50.894 }, 00:14:50.894 { 00:14:50.894 "name": "BaseBdev2", 00:14:50.894 "uuid": "7914f772-7c71-487f-ba81-6547933197a2", 00:14:50.894 "is_configured": true, 00:14:50.894 "data_offset": 0, 00:14:50.894 "data_size": 65536 00:14:50.894 }, 00:14:50.894 { 00:14:50.894 "name": "BaseBdev3", 00:14:50.894 "uuid": "e9da2b30-27b7-4b59-a0d5-7102c036b471", 00:14:50.894 "is_configured": true, 00:14:50.894 "data_offset": 0, 00:14:50.894 "data_size": 65536 00:14:50.894 } 00:14:50.894 ] 00:14:50.894 }' 00:14:50.894 13:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.894 13:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.460 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:51.460 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:51.460 13:09:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:51.460 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:51.460 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:51.460 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:51.460 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:51.460 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:51.460 13:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.460 13:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.460 [2024-12-06 13:09:38.346100] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:51.460 13:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.460 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:51.460 "name": "Existed_Raid", 00:14:51.460 "aliases": [ 00:14:51.460 "ec08a42e-9ace-46ac-b6da-ef3edb8b3139" 00:14:51.460 ], 00:14:51.460 "product_name": "Raid Volume", 00:14:51.460 "block_size": 512, 00:14:51.460 "num_blocks": 196608, 00:14:51.460 "uuid": "ec08a42e-9ace-46ac-b6da-ef3edb8b3139", 00:14:51.460 "assigned_rate_limits": { 00:14:51.460 "rw_ios_per_sec": 0, 00:14:51.460 "rw_mbytes_per_sec": 0, 00:14:51.460 "r_mbytes_per_sec": 0, 00:14:51.460 "w_mbytes_per_sec": 0 00:14:51.460 }, 00:14:51.460 "claimed": false, 00:14:51.460 "zoned": false, 00:14:51.460 "supported_io_types": { 00:14:51.460 "read": true, 00:14:51.460 "write": true, 00:14:51.460 "unmap": true, 00:14:51.460 "flush": true, 00:14:51.460 "reset": true, 00:14:51.460 "nvme_admin": false, 00:14:51.460 "nvme_io": false, 00:14:51.460 "nvme_io_md": false, 00:14:51.460 
"write_zeroes": true, 00:14:51.460 "zcopy": false, 00:14:51.460 "get_zone_info": false, 00:14:51.460 "zone_management": false, 00:14:51.460 "zone_append": false, 00:14:51.460 "compare": false, 00:14:51.460 "compare_and_write": false, 00:14:51.460 "abort": false, 00:14:51.460 "seek_hole": false, 00:14:51.460 "seek_data": false, 00:14:51.460 "copy": false, 00:14:51.460 "nvme_iov_md": false 00:14:51.460 }, 00:14:51.460 "memory_domains": [ 00:14:51.460 { 00:14:51.460 "dma_device_id": "system", 00:14:51.460 "dma_device_type": 1 00:14:51.460 }, 00:14:51.460 { 00:14:51.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.460 "dma_device_type": 2 00:14:51.460 }, 00:14:51.460 { 00:14:51.460 "dma_device_id": "system", 00:14:51.460 "dma_device_type": 1 00:14:51.460 }, 00:14:51.460 { 00:14:51.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.460 "dma_device_type": 2 00:14:51.460 }, 00:14:51.460 { 00:14:51.460 "dma_device_id": "system", 00:14:51.460 "dma_device_type": 1 00:14:51.460 }, 00:14:51.460 { 00:14:51.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.460 "dma_device_type": 2 00:14:51.460 } 00:14:51.460 ], 00:14:51.460 "driver_specific": { 00:14:51.460 "raid": { 00:14:51.460 "uuid": "ec08a42e-9ace-46ac-b6da-ef3edb8b3139", 00:14:51.460 "strip_size_kb": 64, 00:14:51.460 "state": "online", 00:14:51.460 "raid_level": "concat", 00:14:51.460 "superblock": false, 00:14:51.460 "num_base_bdevs": 3, 00:14:51.460 "num_base_bdevs_discovered": 3, 00:14:51.460 "num_base_bdevs_operational": 3, 00:14:51.460 "base_bdevs_list": [ 00:14:51.460 { 00:14:51.460 "name": "NewBaseBdev", 00:14:51.460 "uuid": "99a8c216-6e0e-4756-825b-04bfdfe39a22", 00:14:51.460 "is_configured": true, 00:14:51.460 "data_offset": 0, 00:14:51.460 "data_size": 65536 00:14:51.460 }, 00:14:51.460 { 00:14:51.460 "name": "BaseBdev2", 00:14:51.460 "uuid": "7914f772-7c71-487f-ba81-6547933197a2", 00:14:51.460 "is_configured": true, 00:14:51.460 "data_offset": 0, 00:14:51.460 "data_size": 65536 00:14:51.460 }, 
00:14:51.460 { 00:14:51.460 "name": "BaseBdev3", 00:14:51.460 "uuid": "e9da2b30-27b7-4b59-a0d5-7102c036b471", 00:14:51.460 "is_configured": true, 00:14:51.460 "data_offset": 0, 00:14:51.460 "data_size": 65536 00:14:51.460 } 00:14:51.460 ] 00:14:51.460 } 00:14:51.460 } 00:14:51.460 }' 00:14:51.460 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:51.460 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:51.460 BaseBdev2 00:14:51.460 BaseBdev3' 00:14:51.460 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:51.718 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:51.718 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:51.718 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:51.718 13:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.718 13:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.718 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:51.718 13:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.718 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:51.718 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:51.718 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:51.718 13:09:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:51.718 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:51.718 13:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.718 13:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.718 13:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.718 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:51.718 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:51.718 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:51.718 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:51.718 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:51.718 13:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.718 13:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.718 13:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.718 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:51.718 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:51.718 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:51.718 13:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.718 
13:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.718 [2024-12-06 13:09:38.653774] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:51.718 [2024-12-06 13:09:38.653827] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:51.718 [2024-12-06 13:09:38.653921] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:51.718 [2024-12-06 13:09:38.653996] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:51.718 [2024-12-06 13:09:38.654016] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:51.718 13:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.718 13:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65869 00:14:51.718 13:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65869 ']' 00:14:51.718 13:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65869 00:14:51.718 13:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:51.718 13:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:51.718 13:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65869 00:14:51.718 killing process with pid 65869 00:14:51.718 13:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:51.718 13:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:51.718 13:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65869' 00:14:51.718 13:09:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 65869 00:14:51.718 [2024-12-06 13:09:38.693582] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:51.718 13:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65869 00:14:51.975 [2024-12-06 13:09:38.960317] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:53.345 13:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:53.345 00:14:53.345 real 0m11.911s 00:14:53.345 user 0m19.723s 00:14:53.345 sys 0m1.658s 00:14:53.345 13:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:53.345 13:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.345 ************************************ 00:14:53.345 END TEST raid_state_function_test 00:14:53.345 ************************************ 00:14:53.345 13:09:40 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:14:53.345 13:09:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:53.345 13:09:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:53.345 13:09:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:53.345 ************************************ 00:14:53.345 START TEST raid_state_function_test_sb 00:14:53.345 ************************************ 00:14:53.345 13:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:14:53.345 13:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:14:53.346 13:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:53.346 13:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:53.346 13:09:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:53.346 13:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:53.346 13:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:53.346 13:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:53.346 13:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:53.346 13:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:53.346 13:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:53.346 13:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:53.346 13:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:53.346 13:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:53.346 13:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:53.346 13:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:53.346 13:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:53.346 13:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:53.346 13:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:53.346 13:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:53.346 13:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:53.346 13:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:53.346 13:09:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:14:53.346 13:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:53.346 13:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:53.346 13:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:53.346 13:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:53.346 13:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66501 00:14:53.346 Process raid pid: 66501 00:14:53.346 13:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:53.346 13:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66501' 00:14:53.346 13:09:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66501 00:14:53.346 13:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66501 ']' 00:14:53.346 13:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.346 13:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:53.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.346 13:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:53.346 13:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:53.346 13:09:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.346 [2024-12-06 13:09:40.183373] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:14:53.346 [2024-12-06 13:09:40.183559] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.346 [2024-12-06 13:09:40.358781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.603 [2024-12-06 13:09:40.486538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.862 [2024-12-06 13:09:40.694253] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:53.862 [2024-12-06 13:09:40.694305] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:54.490 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:54.490 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:54.490 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:54.490 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.490 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.490 [2024-12-06 13:09:41.180792] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:54.490 [2024-12-06 13:09:41.180901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:54.490 [2024-12-06 
13:09:41.180919] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:54.490 [2024-12-06 13:09:41.180937] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:54.490 [2024-12-06 13:09:41.180947] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:54.490 [2024-12-06 13:09:41.180962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:54.490 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.490 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:54.490 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.490 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.490 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:54.490 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.490 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:54.490 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.490 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.490 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.490 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.490 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.490 13:09:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.490 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.490 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.490 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.490 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.490 "name": "Existed_Raid", 00:14:54.490 "uuid": "fe7d5e51-5a12-4b2c-913a-86f2c637a4c0", 00:14:54.490 "strip_size_kb": 64, 00:14:54.490 "state": "configuring", 00:14:54.490 "raid_level": "concat", 00:14:54.490 "superblock": true, 00:14:54.490 "num_base_bdevs": 3, 00:14:54.490 "num_base_bdevs_discovered": 0, 00:14:54.490 "num_base_bdevs_operational": 3, 00:14:54.490 "base_bdevs_list": [ 00:14:54.490 { 00:14:54.490 "name": "BaseBdev1", 00:14:54.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.490 "is_configured": false, 00:14:54.490 "data_offset": 0, 00:14:54.490 "data_size": 0 00:14:54.490 }, 00:14:54.490 { 00:14:54.490 "name": "BaseBdev2", 00:14:54.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.490 "is_configured": false, 00:14:54.490 "data_offset": 0, 00:14:54.490 "data_size": 0 00:14:54.490 }, 00:14:54.490 { 00:14:54.490 "name": "BaseBdev3", 00:14:54.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.490 "is_configured": false, 00:14:54.490 "data_offset": 0, 00:14:54.490 "data_size": 0 00:14:54.490 } 00:14:54.490 ] 00:14:54.490 }' 00:14:54.490 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.490 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.748 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:54.748 13:09:41 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.748 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.748 [2024-12-06 13:09:41.684892] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:54.748 [2024-12-06 13:09:41.684958] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:54.748 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.748 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:54.748 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.748 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.748 [2024-12-06 13:09:41.696896] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:54.748 [2024-12-06 13:09:41.697087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:54.748 [2024-12-06 13:09:41.697263] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:54.748 [2024-12-06 13:09:41.697327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:54.748 [2024-12-06 13:09:41.697446] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:54.748 [2024-12-06 13:09:41.697541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:54.748 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.748 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:54.748 
13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.748 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.748 [2024-12-06 13:09:41.746343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:54.748 BaseBdev1 00:14:54.748 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.748 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:54.748 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:54.748 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:54.748 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:54.748 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:54.748 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:54.748 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:54.748 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.748 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.748 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.748 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:54.748 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.748 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.005 [ 00:14:55.005 { 
00:14:55.005 "name": "BaseBdev1", 00:14:55.005 "aliases": [ 00:14:55.005 "05fb9166-4a75-4db9-8c6d-232ba4369832" 00:14:55.005 ], 00:14:55.005 "product_name": "Malloc disk", 00:14:55.005 "block_size": 512, 00:14:55.005 "num_blocks": 65536, 00:14:55.005 "uuid": "05fb9166-4a75-4db9-8c6d-232ba4369832", 00:14:55.005 "assigned_rate_limits": { 00:14:55.005 "rw_ios_per_sec": 0, 00:14:55.005 "rw_mbytes_per_sec": 0, 00:14:55.005 "r_mbytes_per_sec": 0, 00:14:55.005 "w_mbytes_per_sec": 0 00:14:55.005 }, 00:14:55.005 "claimed": true, 00:14:55.005 "claim_type": "exclusive_write", 00:14:55.005 "zoned": false, 00:14:55.005 "supported_io_types": { 00:14:55.005 "read": true, 00:14:55.005 "write": true, 00:14:55.005 "unmap": true, 00:14:55.005 "flush": true, 00:14:55.005 "reset": true, 00:14:55.005 "nvme_admin": false, 00:14:55.005 "nvme_io": false, 00:14:55.005 "nvme_io_md": false, 00:14:55.005 "write_zeroes": true, 00:14:55.005 "zcopy": true, 00:14:55.005 "get_zone_info": false, 00:14:55.005 "zone_management": false, 00:14:55.005 "zone_append": false, 00:14:55.005 "compare": false, 00:14:55.005 "compare_and_write": false, 00:14:55.005 "abort": true, 00:14:55.005 "seek_hole": false, 00:14:55.005 "seek_data": false, 00:14:55.005 "copy": true, 00:14:55.005 "nvme_iov_md": false 00:14:55.005 }, 00:14:55.005 "memory_domains": [ 00:14:55.005 { 00:14:55.005 "dma_device_id": "system", 00:14:55.005 "dma_device_type": 1 00:14:55.005 }, 00:14:55.005 { 00:14:55.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.005 "dma_device_type": 2 00:14:55.005 } 00:14:55.005 ], 00:14:55.005 "driver_specific": {} 00:14:55.005 } 00:14:55.005 ] 00:14:55.005 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.005 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:55.005 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:14:55.005 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.005 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.005 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:55.005 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.005 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.005 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.005 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.005 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.005 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.005 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.005 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.005 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.005 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.005 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.005 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.005 "name": "Existed_Raid", 00:14:55.005 "uuid": "8eb79fa0-3b9d-4959-b1b7-f46e268a8d8e", 00:14:55.006 "strip_size_kb": 64, 00:14:55.006 "state": "configuring", 00:14:55.006 "raid_level": "concat", 00:14:55.006 "superblock": true, 00:14:55.006 
"num_base_bdevs": 3, 00:14:55.006 "num_base_bdevs_discovered": 1, 00:14:55.006 "num_base_bdevs_operational": 3, 00:14:55.006 "base_bdevs_list": [ 00:14:55.006 { 00:14:55.006 "name": "BaseBdev1", 00:14:55.006 "uuid": "05fb9166-4a75-4db9-8c6d-232ba4369832", 00:14:55.006 "is_configured": true, 00:14:55.006 "data_offset": 2048, 00:14:55.006 "data_size": 63488 00:14:55.006 }, 00:14:55.006 { 00:14:55.006 "name": "BaseBdev2", 00:14:55.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.006 "is_configured": false, 00:14:55.006 "data_offset": 0, 00:14:55.006 "data_size": 0 00:14:55.006 }, 00:14:55.006 { 00:14:55.006 "name": "BaseBdev3", 00:14:55.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.006 "is_configured": false, 00:14:55.006 "data_offset": 0, 00:14:55.006 "data_size": 0 00:14:55.006 } 00:14:55.006 ] 00:14:55.006 }' 00:14:55.006 13:09:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.006 13:09:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.572 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:55.572 13:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.572 13:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.572 [2024-12-06 13:09:42.346594] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:55.573 [2024-12-06 13:09:42.346710] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:55.573 13:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.573 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:55.573 
13:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.573 13:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.573 [2024-12-06 13:09:42.358626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:55.573 [2024-12-06 13:09:42.361229] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:55.573 [2024-12-06 13:09:42.361405] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:55.573 [2024-12-06 13:09:42.361541] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:55.573 [2024-12-06 13:09:42.361677] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:55.573 13:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.573 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:55.573 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:55.573 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:55.573 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.573 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.573 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:55.573 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.573 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.573 13:09:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.573 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.573 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.573 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.573 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.573 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.573 13:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.573 13:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.573 13:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.573 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.573 "name": "Existed_Raid", 00:14:55.573 "uuid": "1de7c015-0a2f-4f1d-b8fa-e59d1b0989ef", 00:14:55.573 "strip_size_kb": 64, 00:14:55.573 "state": "configuring", 00:14:55.573 "raid_level": "concat", 00:14:55.573 "superblock": true, 00:14:55.573 "num_base_bdevs": 3, 00:14:55.573 "num_base_bdevs_discovered": 1, 00:14:55.573 "num_base_bdevs_operational": 3, 00:14:55.573 "base_bdevs_list": [ 00:14:55.573 { 00:14:55.573 "name": "BaseBdev1", 00:14:55.573 "uuid": "05fb9166-4a75-4db9-8c6d-232ba4369832", 00:14:55.573 "is_configured": true, 00:14:55.573 "data_offset": 2048, 00:14:55.573 "data_size": 63488 00:14:55.573 }, 00:14:55.573 { 00:14:55.573 "name": "BaseBdev2", 00:14:55.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.573 "is_configured": false, 00:14:55.573 "data_offset": 0, 00:14:55.573 "data_size": 0 00:14:55.573 }, 00:14:55.573 { 00:14:55.573 "name": "BaseBdev3", 00:14:55.573 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:55.573 "is_configured": false, 00:14:55.573 "data_offset": 0, 00:14:55.573 "data_size": 0 00:14:55.573 } 00:14:55.573 ] 00:14:55.573 }' 00:14:55.573 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.573 13:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.139 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:56.139 13:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.139 13:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.139 [2024-12-06 13:09:42.941053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:56.139 BaseBdev2 00:14:56.140 13:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.140 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:56.140 13:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:56.140 13:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:56.140 13:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:56.140 13:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:56.140 13:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:56.140 13:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:56.140 13:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.140 13:09:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:56.140 13:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.140 13:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:56.140 13:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.140 13:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.140 [ 00:14:56.140 { 00:14:56.140 "name": "BaseBdev2", 00:14:56.140 "aliases": [ 00:14:56.140 "073ce291-43b3-4cce-967d-20345b239306" 00:14:56.140 ], 00:14:56.140 "product_name": "Malloc disk", 00:14:56.140 "block_size": 512, 00:14:56.140 "num_blocks": 65536, 00:14:56.140 "uuid": "073ce291-43b3-4cce-967d-20345b239306", 00:14:56.140 "assigned_rate_limits": { 00:14:56.140 "rw_ios_per_sec": 0, 00:14:56.140 "rw_mbytes_per_sec": 0, 00:14:56.140 "r_mbytes_per_sec": 0, 00:14:56.140 "w_mbytes_per_sec": 0 00:14:56.140 }, 00:14:56.140 "claimed": true, 00:14:56.140 "claim_type": "exclusive_write", 00:14:56.140 "zoned": false, 00:14:56.140 "supported_io_types": { 00:14:56.140 "read": true, 00:14:56.140 "write": true, 00:14:56.140 "unmap": true, 00:14:56.140 "flush": true, 00:14:56.140 "reset": true, 00:14:56.140 "nvme_admin": false, 00:14:56.140 "nvme_io": false, 00:14:56.140 "nvme_io_md": false, 00:14:56.140 "write_zeroes": true, 00:14:56.140 "zcopy": true, 00:14:56.140 "get_zone_info": false, 00:14:56.140 "zone_management": false, 00:14:56.140 "zone_append": false, 00:14:56.140 "compare": false, 00:14:56.140 "compare_and_write": false, 00:14:56.140 "abort": true, 00:14:56.140 "seek_hole": false, 00:14:56.140 "seek_data": false, 00:14:56.140 "copy": true, 00:14:56.140 "nvme_iov_md": false 00:14:56.140 }, 00:14:56.140 "memory_domains": [ 00:14:56.140 { 00:14:56.140 "dma_device_id": "system", 00:14:56.140 "dma_device_type": 1 00:14:56.140 }, 00:14:56.140 { 00:14:56.140 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.140 "dma_device_type": 2 00:14:56.140 } 00:14:56.140 ], 00:14:56.140 "driver_specific": {} 00:14:56.140 } 00:14:56.140 ] 00:14:56.140 13:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.140 13:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:56.140 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:56.140 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:56.140 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:56.140 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.140 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.140 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:56.140 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.140 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.140 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.140 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.140 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.140 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.140 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.140 13:09:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.140 13:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.140 13:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.140 13:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.140 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.140 "name": "Existed_Raid", 00:14:56.140 "uuid": "1de7c015-0a2f-4f1d-b8fa-e59d1b0989ef", 00:14:56.140 "strip_size_kb": 64, 00:14:56.140 "state": "configuring", 00:14:56.140 "raid_level": "concat", 00:14:56.140 "superblock": true, 00:14:56.140 "num_base_bdevs": 3, 00:14:56.140 "num_base_bdevs_discovered": 2, 00:14:56.140 "num_base_bdevs_operational": 3, 00:14:56.140 "base_bdevs_list": [ 00:14:56.140 { 00:14:56.140 "name": "BaseBdev1", 00:14:56.140 "uuid": "05fb9166-4a75-4db9-8c6d-232ba4369832", 00:14:56.140 "is_configured": true, 00:14:56.140 "data_offset": 2048, 00:14:56.140 "data_size": 63488 00:14:56.140 }, 00:14:56.140 { 00:14:56.140 "name": "BaseBdev2", 00:14:56.140 "uuid": "073ce291-43b3-4cce-967d-20345b239306", 00:14:56.140 "is_configured": true, 00:14:56.140 "data_offset": 2048, 00:14:56.140 "data_size": 63488 00:14:56.140 }, 00:14:56.140 { 00:14:56.140 "name": "BaseBdev3", 00:14:56.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.140 "is_configured": false, 00:14:56.140 "data_offset": 0, 00:14:56.140 "data_size": 0 00:14:56.140 } 00:14:56.140 ] 00:14:56.140 }' 00:14:56.140 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.140 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.707 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:56.707 13:09:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.707 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.707 [2024-12-06 13:09:43.532153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:56.707 [2024-12-06 13:09:43.532493] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:56.707 [2024-12-06 13:09:43.532539] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:56.708 [2024-12-06 13:09:43.532870] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:56.708 BaseBdev3 00:14:56.708 [2024-12-06 13:09:43.533080] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:56.708 [2024-12-06 13:09:43.533402] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:56.708 [2024-12-06 13:09:43.533678] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.708 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.708 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:56.708 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:56.708 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:56.708 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:56.708 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:56.708 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:56.708 13:09:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:56.708 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.708 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.708 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.708 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:56.708 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.708 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.708 [ 00:14:56.708 { 00:14:56.708 "name": "BaseBdev3", 00:14:56.708 "aliases": [ 00:14:56.708 "76dbbc9b-848f-4fab-95d2-05987e79b52f" 00:14:56.708 ], 00:14:56.708 "product_name": "Malloc disk", 00:14:56.708 "block_size": 512, 00:14:56.708 "num_blocks": 65536, 00:14:56.708 "uuid": "76dbbc9b-848f-4fab-95d2-05987e79b52f", 00:14:56.708 "assigned_rate_limits": { 00:14:56.708 "rw_ios_per_sec": 0, 00:14:56.708 "rw_mbytes_per_sec": 0, 00:14:56.708 "r_mbytes_per_sec": 0, 00:14:56.708 "w_mbytes_per_sec": 0 00:14:56.708 }, 00:14:56.708 "claimed": true, 00:14:56.708 "claim_type": "exclusive_write", 00:14:56.708 "zoned": false, 00:14:56.708 "supported_io_types": { 00:14:56.708 "read": true, 00:14:56.708 "write": true, 00:14:56.708 "unmap": true, 00:14:56.708 "flush": true, 00:14:56.708 "reset": true, 00:14:56.708 "nvme_admin": false, 00:14:56.708 "nvme_io": false, 00:14:56.708 "nvme_io_md": false, 00:14:56.708 "write_zeroes": true, 00:14:56.708 "zcopy": true, 00:14:56.708 "get_zone_info": false, 00:14:56.708 "zone_management": false, 00:14:56.708 "zone_append": false, 00:14:56.708 "compare": false, 00:14:56.708 "compare_and_write": false, 00:14:56.708 "abort": true, 00:14:56.708 "seek_hole": false, 00:14:56.708 "seek_data": false, 
00:14:56.708 "copy": true, 00:14:56.708 "nvme_iov_md": false 00:14:56.708 }, 00:14:56.708 "memory_domains": [ 00:14:56.708 { 00:14:56.708 "dma_device_id": "system", 00:14:56.708 "dma_device_type": 1 00:14:56.708 }, 00:14:56.708 { 00:14:56.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.708 "dma_device_type": 2 00:14:56.708 } 00:14:56.708 ], 00:14:56.708 "driver_specific": {} 00:14:56.708 } 00:14:56.708 ] 00:14:56.708 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.708 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:56.708 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:56.708 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:56.708 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:14:56.708 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.708 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.708 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:56.708 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.708 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.708 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.708 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.708 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.708 13:09:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.708 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.708 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.708 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.708 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.708 13:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.708 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.708 "name": "Existed_Raid", 00:14:56.708 "uuid": "1de7c015-0a2f-4f1d-b8fa-e59d1b0989ef", 00:14:56.708 "strip_size_kb": 64, 00:14:56.708 "state": "online", 00:14:56.708 "raid_level": "concat", 00:14:56.708 "superblock": true, 00:14:56.708 "num_base_bdevs": 3, 00:14:56.708 "num_base_bdevs_discovered": 3, 00:14:56.708 "num_base_bdevs_operational": 3, 00:14:56.708 "base_bdevs_list": [ 00:14:56.708 { 00:14:56.708 "name": "BaseBdev1", 00:14:56.708 "uuid": "05fb9166-4a75-4db9-8c6d-232ba4369832", 00:14:56.708 "is_configured": true, 00:14:56.708 "data_offset": 2048, 00:14:56.708 "data_size": 63488 00:14:56.708 }, 00:14:56.708 { 00:14:56.708 "name": "BaseBdev2", 00:14:56.708 "uuid": "073ce291-43b3-4cce-967d-20345b239306", 00:14:56.708 "is_configured": true, 00:14:56.708 "data_offset": 2048, 00:14:56.708 "data_size": 63488 00:14:56.708 }, 00:14:56.708 { 00:14:56.708 "name": "BaseBdev3", 00:14:56.708 "uuid": "76dbbc9b-848f-4fab-95d2-05987e79b52f", 00:14:56.708 "is_configured": true, 00:14:56.708 "data_offset": 2048, 00:14:56.708 "data_size": 63488 00:14:56.708 } 00:14:56.708 ] 00:14:56.708 }' 00:14:56.708 13:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.708 13:09:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.284 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:57.284 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:57.284 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:57.284 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:57.284 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:57.284 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:57.284 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:57.284 13:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.284 13:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.284 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:57.284 [2024-12-06 13:09:44.100752] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:57.284 13:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.284 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:57.284 "name": "Existed_Raid", 00:14:57.284 "aliases": [ 00:14:57.284 "1de7c015-0a2f-4f1d-b8fa-e59d1b0989ef" 00:14:57.284 ], 00:14:57.284 "product_name": "Raid Volume", 00:14:57.284 "block_size": 512, 00:14:57.284 "num_blocks": 190464, 00:14:57.284 "uuid": "1de7c015-0a2f-4f1d-b8fa-e59d1b0989ef", 00:14:57.284 "assigned_rate_limits": { 00:14:57.284 "rw_ios_per_sec": 0, 00:14:57.284 "rw_mbytes_per_sec": 0, 00:14:57.284 
"r_mbytes_per_sec": 0, 00:14:57.284 "w_mbytes_per_sec": 0 00:14:57.284 }, 00:14:57.284 "claimed": false, 00:14:57.284 "zoned": false, 00:14:57.284 "supported_io_types": { 00:14:57.284 "read": true, 00:14:57.284 "write": true, 00:14:57.284 "unmap": true, 00:14:57.284 "flush": true, 00:14:57.284 "reset": true, 00:14:57.284 "nvme_admin": false, 00:14:57.284 "nvme_io": false, 00:14:57.284 "nvme_io_md": false, 00:14:57.284 "write_zeroes": true, 00:14:57.284 "zcopy": false, 00:14:57.284 "get_zone_info": false, 00:14:57.284 "zone_management": false, 00:14:57.284 "zone_append": false, 00:14:57.284 "compare": false, 00:14:57.284 "compare_and_write": false, 00:14:57.284 "abort": false, 00:14:57.284 "seek_hole": false, 00:14:57.284 "seek_data": false, 00:14:57.284 "copy": false, 00:14:57.284 "nvme_iov_md": false 00:14:57.284 }, 00:14:57.284 "memory_domains": [ 00:14:57.284 { 00:14:57.284 "dma_device_id": "system", 00:14:57.284 "dma_device_type": 1 00:14:57.284 }, 00:14:57.284 { 00:14:57.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.284 "dma_device_type": 2 00:14:57.284 }, 00:14:57.284 { 00:14:57.284 "dma_device_id": "system", 00:14:57.284 "dma_device_type": 1 00:14:57.284 }, 00:14:57.284 { 00:14:57.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.284 "dma_device_type": 2 00:14:57.284 }, 00:14:57.284 { 00:14:57.284 "dma_device_id": "system", 00:14:57.284 "dma_device_type": 1 00:14:57.284 }, 00:14:57.284 { 00:14:57.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.284 "dma_device_type": 2 00:14:57.284 } 00:14:57.284 ], 00:14:57.284 "driver_specific": { 00:14:57.284 "raid": { 00:14:57.284 "uuid": "1de7c015-0a2f-4f1d-b8fa-e59d1b0989ef", 00:14:57.284 "strip_size_kb": 64, 00:14:57.284 "state": "online", 00:14:57.284 "raid_level": "concat", 00:14:57.284 "superblock": true, 00:14:57.284 "num_base_bdevs": 3, 00:14:57.284 "num_base_bdevs_discovered": 3, 00:14:57.284 "num_base_bdevs_operational": 3, 00:14:57.284 "base_bdevs_list": [ 00:14:57.284 { 00:14:57.284 
"name": "BaseBdev1", 00:14:57.284 "uuid": "05fb9166-4a75-4db9-8c6d-232ba4369832", 00:14:57.284 "is_configured": true, 00:14:57.284 "data_offset": 2048, 00:14:57.284 "data_size": 63488 00:14:57.284 }, 00:14:57.284 { 00:14:57.284 "name": "BaseBdev2", 00:14:57.284 "uuid": "073ce291-43b3-4cce-967d-20345b239306", 00:14:57.284 "is_configured": true, 00:14:57.284 "data_offset": 2048, 00:14:57.284 "data_size": 63488 00:14:57.284 }, 00:14:57.284 { 00:14:57.284 "name": "BaseBdev3", 00:14:57.284 "uuid": "76dbbc9b-848f-4fab-95d2-05987e79b52f", 00:14:57.284 "is_configured": true, 00:14:57.284 "data_offset": 2048, 00:14:57.284 "data_size": 63488 00:14:57.284 } 00:14:57.284 ] 00:14:57.284 } 00:14:57.284 } 00:14:57.284 }' 00:14:57.284 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:57.284 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:57.284 BaseBdev2 00:14:57.284 BaseBdev3' 00:14:57.284 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.284 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:57.285 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.285 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:57.285 13:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.285 13:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.285 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.285 13:09:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.543 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.543 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.543 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.543 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:57.543 13:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.543 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.543 13:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.543 13:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.543 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.543 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.543 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.543 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.543 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:57.543 13:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.543 13:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.543 13:09:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.543 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.543 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.543 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:57.544 13:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.544 13:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.544 [2024-12-06 13:09:44.424533] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:57.544 [2024-12-06 13:09:44.424581] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:57.544 [2024-12-06 13:09:44.424656] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:57.544 13:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.544 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:57.544 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:14:57.544 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:57.544 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:14:57.544 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:57.544 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:14:57.544 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.544 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:14:57.544 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:57.544 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.544 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:57.544 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.544 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.544 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.544 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.544 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.544 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.544 13:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.544 13:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.544 13:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.802 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.803 "name": "Existed_Raid", 00:14:57.803 "uuid": "1de7c015-0a2f-4f1d-b8fa-e59d1b0989ef", 00:14:57.803 "strip_size_kb": 64, 00:14:57.803 "state": "offline", 00:14:57.803 "raid_level": "concat", 00:14:57.803 "superblock": true, 00:14:57.803 "num_base_bdevs": 3, 00:14:57.803 "num_base_bdevs_discovered": 2, 00:14:57.803 "num_base_bdevs_operational": 2, 00:14:57.803 "base_bdevs_list": [ 00:14:57.803 { 00:14:57.803 "name": null, 00:14:57.803 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:57.803 "is_configured": false, 00:14:57.803 "data_offset": 0, 00:14:57.803 "data_size": 63488 00:14:57.803 }, 00:14:57.803 { 00:14:57.803 "name": "BaseBdev2", 00:14:57.803 "uuid": "073ce291-43b3-4cce-967d-20345b239306", 00:14:57.803 "is_configured": true, 00:14:57.803 "data_offset": 2048, 00:14:57.803 "data_size": 63488 00:14:57.803 }, 00:14:57.803 { 00:14:57.803 "name": "BaseBdev3", 00:14:57.803 "uuid": "76dbbc9b-848f-4fab-95d2-05987e79b52f", 00:14:57.803 "is_configured": true, 00:14:57.803 "data_offset": 2048, 00:14:57.803 "data_size": 63488 00:14:57.803 } 00:14:57.803 ] 00:14:57.803 }' 00:14:57.803 13:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.803 13:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.061 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:58.061 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:58.061 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.061 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.061 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.061 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:58.061 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.319 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:58.319 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:58.319 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:14:58.319 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.319 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.319 [2024-12-06 13:09:45.106316] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:58.319 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.319 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:58.319 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:58.319 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.319 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:58.319 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.319 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.319 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.319 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:58.319 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:58.319 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:58.319 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.319 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.319 [2024-12-06 13:09:45.248396] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:58.319 [2024-12-06 13:09:45.248493] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:58.576 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.576 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:58.576 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:58.576 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.576 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:58.576 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.576 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.576 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.576 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:58.576 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:58.576 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:58.576 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:58.576 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:58.576 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:58.576 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.576 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.576 BaseBdev2 00:14:58.576 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.576 
13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:58.576 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:58.576 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:58.576 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:58.576 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:58.576 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:58.576 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.577 [ 00:14:58.577 { 00:14:58.577 "name": "BaseBdev2", 00:14:58.577 "aliases": [ 00:14:58.577 "b55f6237-7f2d-4eb2-81f1-6cef27ebc6c2" 00:14:58.577 ], 00:14:58.577 "product_name": "Malloc disk", 00:14:58.577 "block_size": 512, 00:14:58.577 "num_blocks": 65536, 00:14:58.577 "uuid": "b55f6237-7f2d-4eb2-81f1-6cef27ebc6c2", 00:14:58.577 "assigned_rate_limits": { 00:14:58.577 "rw_ios_per_sec": 0, 00:14:58.577 "rw_mbytes_per_sec": 0, 00:14:58.577 "r_mbytes_per_sec": 0, 00:14:58.577 "w_mbytes_per_sec": 0 
00:14:58.577 }, 00:14:58.577 "claimed": false, 00:14:58.577 "zoned": false, 00:14:58.577 "supported_io_types": { 00:14:58.577 "read": true, 00:14:58.577 "write": true, 00:14:58.577 "unmap": true, 00:14:58.577 "flush": true, 00:14:58.577 "reset": true, 00:14:58.577 "nvme_admin": false, 00:14:58.577 "nvme_io": false, 00:14:58.577 "nvme_io_md": false, 00:14:58.577 "write_zeroes": true, 00:14:58.577 "zcopy": true, 00:14:58.577 "get_zone_info": false, 00:14:58.577 "zone_management": false, 00:14:58.577 "zone_append": false, 00:14:58.577 "compare": false, 00:14:58.577 "compare_and_write": false, 00:14:58.577 "abort": true, 00:14:58.577 "seek_hole": false, 00:14:58.577 "seek_data": false, 00:14:58.577 "copy": true, 00:14:58.577 "nvme_iov_md": false 00:14:58.577 }, 00:14:58.577 "memory_domains": [ 00:14:58.577 { 00:14:58.577 "dma_device_id": "system", 00:14:58.577 "dma_device_type": 1 00:14:58.577 }, 00:14:58.577 { 00:14:58.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.577 "dma_device_type": 2 00:14:58.577 } 00:14:58.577 ], 00:14:58.577 "driver_specific": {} 00:14:58.577 } 00:14:58.577 ] 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.577 BaseBdev3 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.577 [ 00:14:58.577 { 00:14:58.577 "name": "BaseBdev3", 00:14:58.577 "aliases": [ 00:14:58.577 "97ec25f4-b877-40b5-98a1-1d852e0b5253" 00:14:58.577 ], 00:14:58.577 "product_name": "Malloc disk", 00:14:58.577 "block_size": 512, 00:14:58.577 "num_blocks": 65536, 00:14:58.577 "uuid": "97ec25f4-b877-40b5-98a1-1d852e0b5253", 00:14:58.577 "assigned_rate_limits": { 00:14:58.577 "rw_ios_per_sec": 0, 00:14:58.577 "rw_mbytes_per_sec": 0, 
00:14:58.577 "r_mbytes_per_sec": 0, 00:14:58.577 "w_mbytes_per_sec": 0 00:14:58.577 }, 00:14:58.577 "claimed": false, 00:14:58.577 "zoned": false, 00:14:58.577 "supported_io_types": { 00:14:58.577 "read": true, 00:14:58.577 "write": true, 00:14:58.577 "unmap": true, 00:14:58.577 "flush": true, 00:14:58.577 "reset": true, 00:14:58.577 "nvme_admin": false, 00:14:58.577 "nvme_io": false, 00:14:58.577 "nvme_io_md": false, 00:14:58.577 "write_zeroes": true, 00:14:58.577 "zcopy": true, 00:14:58.577 "get_zone_info": false, 00:14:58.577 "zone_management": false, 00:14:58.577 "zone_append": false, 00:14:58.577 "compare": false, 00:14:58.577 "compare_and_write": false, 00:14:58.577 "abort": true, 00:14:58.577 "seek_hole": false, 00:14:58.577 "seek_data": false, 00:14:58.577 "copy": true, 00:14:58.577 "nvme_iov_md": false 00:14:58.577 }, 00:14:58.577 "memory_domains": [ 00:14:58.577 { 00:14:58.577 "dma_device_id": "system", 00:14:58.577 "dma_device_type": 1 00:14:58.577 }, 00:14:58.577 { 00:14:58.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.577 "dma_device_type": 2 00:14:58.577 } 00:14:58.577 ], 00:14:58.577 "driver_specific": {} 00:14:58.577 } 00:14:58.577 ] 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:58.577 [2024-12-06 13:09:45.558178] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:58.577 [2024-12-06 13:09:45.559055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:58.577 [2024-12-06 13:09:45.559234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:58.577 [2024-12-06 13:09:45.561978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.577 13:09:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.577 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.836 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.836 "name": "Existed_Raid", 00:14:58.836 "uuid": "3c703a60-10ca-4f46-82c5-b64d7506da3d", 00:14:58.836 "strip_size_kb": 64, 00:14:58.836 "state": "configuring", 00:14:58.836 "raid_level": "concat", 00:14:58.836 "superblock": true, 00:14:58.836 "num_base_bdevs": 3, 00:14:58.836 "num_base_bdevs_discovered": 2, 00:14:58.836 "num_base_bdevs_operational": 3, 00:14:58.836 "base_bdevs_list": [ 00:14:58.836 { 00:14:58.836 "name": "BaseBdev1", 00:14:58.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.836 "is_configured": false, 00:14:58.836 "data_offset": 0, 00:14:58.836 "data_size": 0 00:14:58.836 }, 00:14:58.836 { 00:14:58.836 "name": "BaseBdev2", 00:14:58.836 "uuid": "b55f6237-7f2d-4eb2-81f1-6cef27ebc6c2", 00:14:58.836 "is_configured": true, 00:14:58.836 "data_offset": 2048, 00:14:58.836 "data_size": 63488 00:14:58.836 }, 00:14:58.836 { 00:14:58.836 "name": "BaseBdev3", 00:14:58.836 "uuid": "97ec25f4-b877-40b5-98a1-1d852e0b5253", 00:14:58.836 "is_configured": true, 00:14:58.836 "data_offset": 2048, 00:14:58.836 "data_size": 63488 00:14:58.836 } 00:14:58.836 ] 00:14:58.836 }' 00:14:58.836 13:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.836 13:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.096 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:14:59.096 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.096 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.096 [2024-12-06 13:09:46.090658] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:59.096 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.096 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:59.096 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:59.096 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.096 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:59.096 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.096 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:59.096 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.096 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.096 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.096 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.096 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.096 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.096 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:59.096 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.356 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.356 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.356 "name": "Existed_Raid", 00:14:59.356 "uuid": "3c703a60-10ca-4f46-82c5-b64d7506da3d", 00:14:59.356 "strip_size_kb": 64, 00:14:59.356 "state": "configuring", 00:14:59.356 "raid_level": "concat", 00:14:59.356 "superblock": true, 00:14:59.356 "num_base_bdevs": 3, 00:14:59.356 "num_base_bdevs_discovered": 1, 00:14:59.356 "num_base_bdevs_operational": 3, 00:14:59.356 "base_bdevs_list": [ 00:14:59.356 { 00:14:59.356 "name": "BaseBdev1", 00:14:59.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.356 "is_configured": false, 00:14:59.356 "data_offset": 0, 00:14:59.356 "data_size": 0 00:14:59.356 }, 00:14:59.356 { 00:14:59.356 "name": null, 00:14:59.356 "uuid": "b55f6237-7f2d-4eb2-81f1-6cef27ebc6c2", 00:14:59.356 "is_configured": false, 00:14:59.356 "data_offset": 0, 00:14:59.356 "data_size": 63488 00:14:59.356 }, 00:14:59.356 { 00:14:59.356 "name": "BaseBdev3", 00:14:59.356 "uuid": "97ec25f4-b877-40b5-98a1-1d852e0b5253", 00:14:59.356 "is_configured": true, 00:14:59.356 "data_offset": 2048, 00:14:59.356 "data_size": 63488 00:14:59.356 } 00:14:59.356 ] 00:14:59.356 }' 00:14:59.356 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.356 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.922 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.922 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:59.922 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:59.922 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.922 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.922 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:59.922 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:59.922 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.922 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.922 [2024-12-06 13:09:46.716526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:59.922 BaseBdev1 00:14:59.922 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.922 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:59.922 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:59.922 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:59.922 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:59.922 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:59.922 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:59.922 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:59.923 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.923 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.923 13:09:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.923 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:59.923 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.923 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.923 [ 00:14:59.923 { 00:14:59.923 "name": "BaseBdev1", 00:14:59.923 "aliases": [ 00:14:59.923 "6fbf15d2-d10a-4266-b941-8df53c3008f0" 00:14:59.923 ], 00:14:59.923 "product_name": "Malloc disk", 00:14:59.923 "block_size": 512, 00:14:59.923 "num_blocks": 65536, 00:14:59.923 "uuid": "6fbf15d2-d10a-4266-b941-8df53c3008f0", 00:14:59.923 "assigned_rate_limits": { 00:14:59.923 "rw_ios_per_sec": 0, 00:14:59.923 "rw_mbytes_per_sec": 0, 00:14:59.923 "r_mbytes_per_sec": 0, 00:14:59.923 "w_mbytes_per_sec": 0 00:14:59.923 }, 00:14:59.923 "claimed": true, 00:14:59.923 "claim_type": "exclusive_write", 00:14:59.923 "zoned": false, 00:14:59.923 "supported_io_types": { 00:14:59.923 "read": true, 00:14:59.923 "write": true, 00:14:59.923 "unmap": true, 00:14:59.923 "flush": true, 00:14:59.923 "reset": true, 00:14:59.923 "nvme_admin": false, 00:14:59.923 "nvme_io": false, 00:14:59.923 "nvme_io_md": false, 00:14:59.923 "write_zeroes": true, 00:14:59.923 "zcopy": true, 00:14:59.923 "get_zone_info": false, 00:14:59.923 "zone_management": false, 00:14:59.923 "zone_append": false, 00:14:59.923 "compare": false, 00:14:59.923 "compare_and_write": false, 00:14:59.923 "abort": true, 00:14:59.923 "seek_hole": false, 00:14:59.923 "seek_data": false, 00:14:59.923 "copy": true, 00:14:59.923 "nvme_iov_md": false 00:14:59.923 }, 00:14:59.923 "memory_domains": [ 00:14:59.923 { 00:14:59.923 "dma_device_id": "system", 00:14:59.923 "dma_device_type": 1 00:14:59.923 }, 00:14:59.923 { 00:14:59.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.923 
"dma_device_type": 2 00:14:59.923 } 00:14:59.923 ], 00:14:59.923 "driver_specific": {} 00:14:59.923 } 00:14:59.923 ] 00:14:59.923 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.923 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:59.923 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:59.923 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:59.923 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.923 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:59.923 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.923 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:59.923 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.923 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.923 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.923 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.923 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.923 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.923 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.923 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:14:59.923 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.923 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.923 "name": "Existed_Raid", 00:14:59.923 "uuid": "3c703a60-10ca-4f46-82c5-b64d7506da3d", 00:14:59.923 "strip_size_kb": 64, 00:14:59.923 "state": "configuring", 00:14:59.923 "raid_level": "concat", 00:14:59.923 "superblock": true, 00:14:59.923 "num_base_bdevs": 3, 00:14:59.923 "num_base_bdevs_discovered": 2, 00:14:59.923 "num_base_bdevs_operational": 3, 00:14:59.923 "base_bdevs_list": [ 00:14:59.923 { 00:14:59.923 "name": "BaseBdev1", 00:14:59.923 "uuid": "6fbf15d2-d10a-4266-b941-8df53c3008f0", 00:14:59.923 "is_configured": true, 00:14:59.923 "data_offset": 2048, 00:14:59.923 "data_size": 63488 00:14:59.923 }, 00:14:59.923 { 00:14:59.923 "name": null, 00:14:59.923 "uuid": "b55f6237-7f2d-4eb2-81f1-6cef27ebc6c2", 00:14:59.923 "is_configured": false, 00:14:59.923 "data_offset": 0, 00:14:59.923 "data_size": 63488 00:14:59.923 }, 00:14:59.923 { 00:14:59.923 "name": "BaseBdev3", 00:14:59.923 "uuid": "97ec25f4-b877-40b5-98a1-1d852e0b5253", 00:14:59.923 "is_configured": true, 00:14:59.923 "data_offset": 2048, 00:14:59.923 "data_size": 63488 00:14:59.923 } 00:14:59.923 ] 00:14:59.923 }' 00:14:59.923 13:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.923 13:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.490 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.490 13:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.490 13:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.490 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:15:00.490 13:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.490 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:00.490 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:00.490 13:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.490 13:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.490 [2024-12-06 13:09:47.328817] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:00.490 13:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.490 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:00.490 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.490 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.490 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:00.490 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.490 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.490 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.490 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.490 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.490 13:09:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.490 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.490 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.490 13:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.490 13:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.490 13:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.490 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.490 "name": "Existed_Raid", 00:15:00.490 "uuid": "3c703a60-10ca-4f46-82c5-b64d7506da3d", 00:15:00.490 "strip_size_kb": 64, 00:15:00.490 "state": "configuring", 00:15:00.490 "raid_level": "concat", 00:15:00.490 "superblock": true, 00:15:00.490 "num_base_bdevs": 3, 00:15:00.490 "num_base_bdevs_discovered": 1, 00:15:00.490 "num_base_bdevs_operational": 3, 00:15:00.490 "base_bdevs_list": [ 00:15:00.490 { 00:15:00.490 "name": "BaseBdev1", 00:15:00.490 "uuid": "6fbf15d2-d10a-4266-b941-8df53c3008f0", 00:15:00.490 "is_configured": true, 00:15:00.490 "data_offset": 2048, 00:15:00.490 "data_size": 63488 00:15:00.490 }, 00:15:00.490 { 00:15:00.490 "name": null, 00:15:00.490 "uuid": "b55f6237-7f2d-4eb2-81f1-6cef27ebc6c2", 00:15:00.490 "is_configured": false, 00:15:00.490 "data_offset": 0, 00:15:00.490 "data_size": 63488 00:15:00.490 }, 00:15:00.490 { 00:15:00.490 "name": null, 00:15:00.490 "uuid": "97ec25f4-b877-40b5-98a1-1d852e0b5253", 00:15:00.490 "is_configured": false, 00:15:00.490 "data_offset": 0, 00:15:00.490 "data_size": 63488 00:15:00.490 } 00:15:00.490 ] 00:15:00.490 }' 00:15:00.490 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.490 13:09:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:01.059 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.059 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:01.059 13:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.059 13:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.059 13:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.059 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:01.059 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:01.059 13:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.059 13:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.059 [2024-12-06 13:09:47.932914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:01.059 13:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.059 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:01.059 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.059 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.059 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:01.060 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.060 13:09:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:01.060 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.060 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.060 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.060 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.060 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.060 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.060 13:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.060 13:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.060 13:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.060 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.060 "name": "Existed_Raid", 00:15:01.060 "uuid": "3c703a60-10ca-4f46-82c5-b64d7506da3d", 00:15:01.060 "strip_size_kb": 64, 00:15:01.060 "state": "configuring", 00:15:01.060 "raid_level": "concat", 00:15:01.060 "superblock": true, 00:15:01.060 "num_base_bdevs": 3, 00:15:01.060 "num_base_bdevs_discovered": 2, 00:15:01.060 "num_base_bdevs_operational": 3, 00:15:01.060 "base_bdevs_list": [ 00:15:01.060 { 00:15:01.060 "name": "BaseBdev1", 00:15:01.060 "uuid": "6fbf15d2-d10a-4266-b941-8df53c3008f0", 00:15:01.060 "is_configured": true, 00:15:01.060 "data_offset": 2048, 00:15:01.060 "data_size": 63488 00:15:01.060 }, 00:15:01.060 { 00:15:01.060 "name": null, 00:15:01.060 "uuid": "b55f6237-7f2d-4eb2-81f1-6cef27ebc6c2", 00:15:01.060 "is_configured": 
false, 00:15:01.060 "data_offset": 0, 00:15:01.060 "data_size": 63488 00:15:01.060 }, 00:15:01.060 { 00:15:01.060 "name": "BaseBdev3", 00:15:01.060 "uuid": "97ec25f4-b877-40b5-98a1-1d852e0b5253", 00:15:01.060 "is_configured": true, 00:15:01.060 "data_offset": 2048, 00:15:01.060 "data_size": 63488 00:15:01.060 } 00:15:01.060 ] 00:15:01.060 }' 00:15:01.060 13:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.060 13:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.627 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:01.627 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.627 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.627 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.627 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.627 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:01.627 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:01.627 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.627 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.627 [2024-12-06 13:09:48.489119] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:01.627 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.627 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:01.627 13:09:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.627 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.627 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:01.627 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.627 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:01.627 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.627 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.627 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.627 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.627 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.627 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.627 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.627 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.627 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.627 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.627 "name": "Existed_Raid", 00:15:01.627 "uuid": "3c703a60-10ca-4f46-82c5-b64d7506da3d", 00:15:01.627 "strip_size_kb": 64, 00:15:01.627 "state": "configuring", 00:15:01.627 "raid_level": "concat", 00:15:01.627 "superblock": true, 00:15:01.627 "num_base_bdevs": 3, 00:15:01.627 
"num_base_bdevs_discovered": 1, 00:15:01.627 "num_base_bdevs_operational": 3, 00:15:01.627 "base_bdevs_list": [ 00:15:01.627 { 00:15:01.627 "name": null, 00:15:01.627 "uuid": "6fbf15d2-d10a-4266-b941-8df53c3008f0", 00:15:01.627 "is_configured": false, 00:15:01.627 "data_offset": 0, 00:15:01.627 "data_size": 63488 00:15:01.627 }, 00:15:01.627 { 00:15:01.627 "name": null, 00:15:01.627 "uuid": "b55f6237-7f2d-4eb2-81f1-6cef27ebc6c2", 00:15:01.627 "is_configured": false, 00:15:01.627 "data_offset": 0, 00:15:01.627 "data_size": 63488 00:15:01.627 }, 00:15:01.627 { 00:15:01.627 "name": "BaseBdev3", 00:15:01.627 "uuid": "97ec25f4-b877-40b5-98a1-1d852e0b5253", 00:15:01.627 "is_configured": true, 00:15:01.627 "data_offset": 2048, 00:15:01.627 "data_size": 63488 00:15:01.627 } 00:15:01.627 ] 00:15:01.627 }' 00:15:01.627 13:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.627 13:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.195 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.195 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.195 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.195 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:02.195 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.195 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:02.195 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:02.195 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.195 13:09:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.196 [2024-12-06 13:09:49.141804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:02.196 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.196 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:02.196 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.196 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:02.196 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:02.196 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.196 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:02.196 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.196 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.196 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.196 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.196 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.196 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.196 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.196 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.196 
13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.196 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.196 "name": "Existed_Raid", 00:15:02.196 "uuid": "3c703a60-10ca-4f46-82c5-b64d7506da3d", 00:15:02.196 "strip_size_kb": 64, 00:15:02.196 "state": "configuring", 00:15:02.196 "raid_level": "concat", 00:15:02.196 "superblock": true, 00:15:02.196 "num_base_bdevs": 3, 00:15:02.196 "num_base_bdevs_discovered": 2, 00:15:02.196 "num_base_bdevs_operational": 3, 00:15:02.196 "base_bdevs_list": [ 00:15:02.196 { 00:15:02.196 "name": null, 00:15:02.196 "uuid": "6fbf15d2-d10a-4266-b941-8df53c3008f0", 00:15:02.196 "is_configured": false, 00:15:02.196 "data_offset": 0, 00:15:02.196 "data_size": 63488 00:15:02.196 }, 00:15:02.196 { 00:15:02.196 "name": "BaseBdev2", 00:15:02.196 "uuid": "b55f6237-7f2d-4eb2-81f1-6cef27ebc6c2", 00:15:02.196 "is_configured": true, 00:15:02.196 "data_offset": 2048, 00:15:02.196 "data_size": 63488 00:15:02.196 }, 00:15:02.196 { 00:15:02.196 "name": "BaseBdev3", 00:15:02.196 "uuid": "97ec25f4-b877-40b5-98a1-1d852e0b5253", 00:15:02.196 "is_configured": true, 00:15:02.196 "data_offset": 2048, 00:15:02.196 "data_size": 63488 00:15:02.196 } 00:15:02.196 ] 00:15:02.196 }' 00:15:02.196 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.196 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.796 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.796 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:02.796 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.796 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:02.796 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.796 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:02.796 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.796 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:02.796 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.796 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.796 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.796 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6fbf15d2-d10a-4266-b941-8df53c3008f0 00:15:02.796 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.796 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.056 [2024-12-06 13:09:49.834285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:03.056 [2024-12-06 13:09:49.834630] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:03.056 [2024-12-06 13:09:49.834657] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:03.056 NewBaseBdev 00:15:03.056 [2024-12-06 13:09:49.835005] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:03.056 [2024-12-06 13:09:49.835208] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:03.056 [2024-12-06 13:09:49.835225] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000008200 00:15:03.056 [2024-12-06 13:09:49.835401] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.056 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.056 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:03.056 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:03.056 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:03.056 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:03.056 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:03.056 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:03.056 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:03.056 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.056 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.056 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.056 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:03.056 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.056 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.056 [ 00:15:03.056 { 00:15:03.056 "name": "NewBaseBdev", 00:15:03.056 "aliases": [ 00:15:03.056 "6fbf15d2-d10a-4266-b941-8df53c3008f0" 00:15:03.056 ], 00:15:03.056 "product_name": "Malloc disk", 00:15:03.056 "block_size": 512, 
00:15:03.056 "num_blocks": 65536, 00:15:03.056 "uuid": "6fbf15d2-d10a-4266-b941-8df53c3008f0", 00:15:03.056 "assigned_rate_limits": { 00:15:03.056 "rw_ios_per_sec": 0, 00:15:03.056 "rw_mbytes_per_sec": 0, 00:15:03.056 "r_mbytes_per_sec": 0, 00:15:03.056 "w_mbytes_per_sec": 0 00:15:03.056 }, 00:15:03.056 "claimed": true, 00:15:03.056 "claim_type": "exclusive_write", 00:15:03.056 "zoned": false, 00:15:03.056 "supported_io_types": { 00:15:03.056 "read": true, 00:15:03.056 "write": true, 00:15:03.056 "unmap": true, 00:15:03.056 "flush": true, 00:15:03.056 "reset": true, 00:15:03.056 "nvme_admin": false, 00:15:03.056 "nvme_io": false, 00:15:03.056 "nvme_io_md": false, 00:15:03.056 "write_zeroes": true, 00:15:03.056 "zcopy": true, 00:15:03.056 "get_zone_info": false, 00:15:03.056 "zone_management": false, 00:15:03.056 "zone_append": false, 00:15:03.056 "compare": false, 00:15:03.056 "compare_and_write": false, 00:15:03.056 "abort": true, 00:15:03.056 "seek_hole": false, 00:15:03.056 "seek_data": false, 00:15:03.056 "copy": true, 00:15:03.056 "nvme_iov_md": false 00:15:03.056 }, 00:15:03.056 "memory_domains": [ 00:15:03.056 { 00:15:03.056 "dma_device_id": "system", 00:15:03.056 "dma_device_type": 1 00:15:03.056 }, 00:15:03.056 { 00:15:03.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.056 "dma_device_type": 2 00:15:03.056 } 00:15:03.056 ], 00:15:03.056 "driver_specific": {} 00:15:03.056 } 00:15:03.056 ] 00:15:03.056 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.056 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:03.056 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:03.056 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:03.056 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:15:03.056 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:03.056 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.056 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:03.056 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.056 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.057 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.057 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.057 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.057 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.057 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.057 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.057 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.057 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.057 "name": "Existed_Raid", 00:15:03.057 "uuid": "3c703a60-10ca-4f46-82c5-b64d7506da3d", 00:15:03.057 "strip_size_kb": 64, 00:15:03.057 "state": "online", 00:15:03.057 "raid_level": "concat", 00:15:03.057 "superblock": true, 00:15:03.057 "num_base_bdevs": 3, 00:15:03.057 "num_base_bdevs_discovered": 3, 00:15:03.057 "num_base_bdevs_operational": 3, 00:15:03.057 "base_bdevs_list": [ 00:15:03.057 { 00:15:03.057 "name": "NewBaseBdev", 00:15:03.057 "uuid": 
"6fbf15d2-d10a-4266-b941-8df53c3008f0", 00:15:03.057 "is_configured": true, 00:15:03.057 "data_offset": 2048, 00:15:03.057 "data_size": 63488 00:15:03.057 }, 00:15:03.057 { 00:15:03.057 "name": "BaseBdev2", 00:15:03.057 "uuid": "b55f6237-7f2d-4eb2-81f1-6cef27ebc6c2", 00:15:03.057 "is_configured": true, 00:15:03.057 "data_offset": 2048, 00:15:03.057 "data_size": 63488 00:15:03.057 }, 00:15:03.057 { 00:15:03.057 "name": "BaseBdev3", 00:15:03.057 "uuid": "97ec25f4-b877-40b5-98a1-1d852e0b5253", 00:15:03.057 "is_configured": true, 00:15:03.057 "data_offset": 2048, 00:15:03.057 "data_size": 63488 00:15:03.057 } 00:15:03.057 ] 00:15:03.057 }' 00:15:03.057 13:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.057 13:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.625 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:03.625 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:03.625 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:03.625 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:03.625 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:03.625 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:03.625 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:03.625 13:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.625 13:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.625 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:15:03.625 [2024-12-06 13:09:50.402949] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:03.625 13:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.625 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:03.625 "name": "Existed_Raid", 00:15:03.625 "aliases": [ 00:15:03.625 "3c703a60-10ca-4f46-82c5-b64d7506da3d" 00:15:03.625 ], 00:15:03.625 "product_name": "Raid Volume", 00:15:03.625 "block_size": 512, 00:15:03.625 "num_blocks": 190464, 00:15:03.625 "uuid": "3c703a60-10ca-4f46-82c5-b64d7506da3d", 00:15:03.625 "assigned_rate_limits": { 00:15:03.626 "rw_ios_per_sec": 0, 00:15:03.626 "rw_mbytes_per_sec": 0, 00:15:03.626 "r_mbytes_per_sec": 0, 00:15:03.626 "w_mbytes_per_sec": 0 00:15:03.626 }, 00:15:03.626 "claimed": false, 00:15:03.626 "zoned": false, 00:15:03.626 "supported_io_types": { 00:15:03.626 "read": true, 00:15:03.626 "write": true, 00:15:03.626 "unmap": true, 00:15:03.626 "flush": true, 00:15:03.626 "reset": true, 00:15:03.626 "nvme_admin": false, 00:15:03.626 "nvme_io": false, 00:15:03.626 "nvme_io_md": false, 00:15:03.626 "write_zeroes": true, 00:15:03.626 "zcopy": false, 00:15:03.626 "get_zone_info": false, 00:15:03.626 "zone_management": false, 00:15:03.626 "zone_append": false, 00:15:03.626 "compare": false, 00:15:03.626 "compare_and_write": false, 00:15:03.626 "abort": false, 00:15:03.626 "seek_hole": false, 00:15:03.626 "seek_data": false, 00:15:03.626 "copy": false, 00:15:03.626 "nvme_iov_md": false 00:15:03.626 }, 00:15:03.626 "memory_domains": [ 00:15:03.626 { 00:15:03.626 "dma_device_id": "system", 00:15:03.626 "dma_device_type": 1 00:15:03.626 }, 00:15:03.626 { 00:15:03.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.626 "dma_device_type": 2 00:15:03.626 }, 00:15:03.626 { 00:15:03.626 "dma_device_id": "system", 00:15:03.626 "dma_device_type": 1 00:15:03.626 }, 00:15:03.626 { 00:15:03.626 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.626 "dma_device_type": 2 00:15:03.626 }, 00:15:03.626 { 00:15:03.626 "dma_device_id": "system", 00:15:03.626 "dma_device_type": 1 00:15:03.626 }, 00:15:03.626 { 00:15:03.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.626 "dma_device_type": 2 00:15:03.626 } 00:15:03.626 ], 00:15:03.626 "driver_specific": { 00:15:03.626 "raid": { 00:15:03.626 "uuid": "3c703a60-10ca-4f46-82c5-b64d7506da3d", 00:15:03.626 "strip_size_kb": 64, 00:15:03.626 "state": "online", 00:15:03.626 "raid_level": "concat", 00:15:03.626 "superblock": true, 00:15:03.626 "num_base_bdevs": 3, 00:15:03.626 "num_base_bdevs_discovered": 3, 00:15:03.626 "num_base_bdevs_operational": 3, 00:15:03.626 "base_bdevs_list": [ 00:15:03.626 { 00:15:03.626 "name": "NewBaseBdev", 00:15:03.626 "uuid": "6fbf15d2-d10a-4266-b941-8df53c3008f0", 00:15:03.626 "is_configured": true, 00:15:03.626 "data_offset": 2048, 00:15:03.626 "data_size": 63488 00:15:03.626 }, 00:15:03.626 { 00:15:03.626 "name": "BaseBdev2", 00:15:03.626 "uuid": "b55f6237-7f2d-4eb2-81f1-6cef27ebc6c2", 00:15:03.626 "is_configured": true, 00:15:03.626 "data_offset": 2048, 00:15:03.626 "data_size": 63488 00:15:03.626 }, 00:15:03.626 { 00:15:03.626 "name": "BaseBdev3", 00:15:03.626 "uuid": "97ec25f4-b877-40b5-98a1-1d852e0b5253", 00:15:03.626 "is_configured": true, 00:15:03.626 "data_offset": 2048, 00:15:03.626 "data_size": 63488 00:15:03.626 } 00:15:03.626 ] 00:15:03.626 } 00:15:03.626 } 00:15:03.626 }' 00:15:03.626 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:03.626 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:03.626 BaseBdev2 00:15:03.626 BaseBdev3' 00:15:03.626 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:15:03.626 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:03.626 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.626 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:03.626 13:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.626 13:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.626 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.626 13:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.626 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.626 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.626 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.626 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:03.626 13:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.626 13:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.626 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.626 13:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.885 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.885 13:09:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.885 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.885 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:03.885 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.885 13:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.885 13:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.885 13:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.885 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.885 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.885 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:03.885 13:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.885 13:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.885 [2024-12-06 13:09:50.710563] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:03.885 [2024-12-06 13:09:50.710623] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:03.885 [2024-12-06 13:09:50.710718] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:03.885 [2024-12-06 13:09:50.710796] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:03.885 [2024-12-06 13:09:50.710827] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:15:03.885 13:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.885 13:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66501 00:15:03.885 13:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66501 ']' 00:15:03.885 13:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66501 00:15:03.885 13:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:03.885 13:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:03.885 13:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66501 00:15:03.885 killing process with pid 66501 00:15:03.885 13:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:03.885 13:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:03.885 13:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66501' 00:15:03.885 13:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66501 00:15:03.885 [2024-12-06 13:09:50.748441] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:03.885 13:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66501 00:15:04.149 [2024-12-06 13:09:51.024292] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:05.523 13:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:05.523 ************************************ 00:15:05.523 END TEST raid_state_function_test_sb 00:15:05.523 ************************************ 00:15:05.523 00:15:05.523 real 0m12.045s 
00:15:05.523 user 0m19.963s 00:15:05.523 sys 0m1.674s 00:15:05.523 13:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:05.523 13:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.523 13:09:52 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:15:05.523 13:09:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:05.523 13:09:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:05.523 13:09:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:05.523 ************************************ 00:15:05.524 START TEST raid_superblock_test 00:15:05.524 ************************************ 00:15:05.524 13:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:15:05.524 13:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:15:05.524 13:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:05.524 13:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:05.524 13:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:05.524 13:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:05.524 13:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:05.524 13:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:05.524 13:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:05.524 13:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:05.524 13:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:05.524 13:09:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:05.524 13:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:05.524 13:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:05.524 13:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:15:05.524 13:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:05.524 13:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:05.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.524 13:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67142 00:15:05.524 13:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67142 00:15:05.524 13:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 67142 ']' 00:15:05.524 13:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.524 13:09:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:05.524 13:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:05.524 13:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.524 13:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:05.524 13:09:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.524 [2024-12-06 13:09:52.281728] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:15:05.524 [2024-12-06 13:09:52.281921] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67142 ] 00:15:05.524 [2024-12-06 13:09:52.473972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.782 [2024-12-06 13:09:52.624065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.040 [2024-12-06 13:09:52.854347] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:06.040 [2024-12-06 13:09:52.854439] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:06.607 13:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:06.607 13:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:06.607 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:06.607 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:06.607 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:06.607 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:06.607 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:06.607 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:06.607 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:06.607 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:06.607 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:06.607 
13:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.607 13:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.607 malloc1 00:15:06.607 13:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.607 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:06.607 13:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.607 13:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.607 [2024-12-06 13:09:53.418539] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:06.607 [2024-12-06 13:09:53.418633] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.608 [2024-12-06 13:09:53.418671] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:06.608 [2024-12-06 13:09:53.418689] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.608 [2024-12-06 13:09:53.421765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.608 [2024-12-06 13:09:53.421809] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:06.608 pt1 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.608 malloc2 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.608 [2024-12-06 13:09:53.478517] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:06.608 [2024-12-06 13:09:53.478592] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.608 [2024-12-06 13:09:53.478641] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:06.608 [2024-12-06 13:09:53.478657] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.608 [2024-12-06 13:09:53.481628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.608 [2024-12-06 13:09:53.481674] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:06.608 
pt2 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.608 malloc3 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.608 [2024-12-06 13:09:53.551216] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:06.608 [2024-12-06 13:09:53.551294] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.608 [2024-12-06 13:09:53.551332] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:06.608 [2024-12-06 13:09:53.551348] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.608 [2024-12-06 13:09:53.554272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.608 [2024-12-06 13:09:53.554318] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:06.608 pt3 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.608 [2024-12-06 13:09:53.559276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:06.608 [2024-12-06 13:09:53.561876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:06.608 [2024-12-06 13:09:53.562009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:06.608 [2024-12-06 13:09:53.562238] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:06.608 [2024-12-06 13:09:53.562273] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:06.608 [2024-12-06 13:09:53.562635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:15:06.608 [2024-12-06 13:09:53.562870] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:06.608 [2024-12-06 13:09:53.562897] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:06.608 [2024-12-06 13:09:53.563093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.608 13:09:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.608 13:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.865 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.865 "name": "raid_bdev1", 00:15:06.865 "uuid": "d3b19b6b-5993-45dc-9de8-2c0197b48f53", 00:15:06.865 "strip_size_kb": 64, 00:15:06.865 "state": "online", 00:15:06.865 "raid_level": "concat", 00:15:06.865 "superblock": true, 00:15:06.865 "num_base_bdevs": 3, 00:15:06.865 "num_base_bdevs_discovered": 3, 00:15:06.865 "num_base_bdevs_operational": 3, 00:15:06.865 "base_bdevs_list": [ 00:15:06.865 { 00:15:06.865 "name": "pt1", 00:15:06.865 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:06.865 "is_configured": true, 00:15:06.865 "data_offset": 2048, 00:15:06.865 "data_size": 63488 00:15:06.865 }, 00:15:06.865 { 00:15:06.865 "name": "pt2", 00:15:06.865 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:06.865 "is_configured": true, 00:15:06.865 "data_offset": 2048, 00:15:06.865 "data_size": 63488 00:15:06.865 }, 00:15:06.865 { 00:15:06.865 "name": "pt3", 00:15:06.865 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:06.865 "is_configured": true, 00:15:06.865 "data_offset": 2048, 00:15:06.865 "data_size": 63488 00:15:06.865 } 00:15:06.865 ] 00:15:06.865 }' 00:15:06.865 13:09:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.865 13:09:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.123 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:07.123 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:07.123 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:07.123 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:15:07.123 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:07.123 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:07.123 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:07.123 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.123 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.123 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:07.123 [2024-12-06 13:09:54.107859] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:07.123 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.380 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:07.380 "name": "raid_bdev1", 00:15:07.380 "aliases": [ 00:15:07.380 "d3b19b6b-5993-45dc-9de8-2c0197b48f53" 00:15:07.380 ], 00:15:07.380 "product_name": "Raid Volume", 00:15:07.380 "block_size": 512, 00:15:07.380 "num_blocks": 190464, 00:15:07.380 "uuid": "d3b19b6b-5993-45dc-9de8-2c0197b48f53", 00:15:07.380 "assigned_rate_limits": { 00:15:07.380 "rw_ios_per_sec": 0, 00:15:07.380 "rw_mbytes_per_sec": 0, 00:15:07.380 "r_mbytes_per_sec": 0, 00:15:07.380 "w_mbytes_per_sec": 0 00:15:07.380 }, 00:15:07.380 "claimed": false, 00:15:07.380 "zoned": false, 00:15:07.380 "supported_io_types": { 00:15:07.380 "read": true, 00:15:07.380 "write": true, 00:15:07.380 "unmap": true, 00:15:07.380 "flush": true, 00:15:07.380 "reset": true, 00:15:07.380 "nvme_admin": false, 00:15:07.380 "nvme_io": false, 00:15:07.380 "nvme_io_md": false, 00:15:07.380 "write_zeroes": true, 00:15:07.380 "zcopy": false, 00:15:07.380 "get_zone_info": false, 00:15:07.380 "zone_management": false, 00:15:07.380 "zone_append": false, 00:15:07.380 "compare": 
false, 00:15:07.380 "compare_and_write": false, 00:15:07.380 "abort": false, 00:15:07.380 "seek_hole": false, 00:15:07.380 "seek_data": false, 00:15:07.380 "copy": false, 00:15:07.380 "nvme_iov_md": false 00:15:07.380 }, 00:15:07.380 "memory_domains": [ 00:15:07.380 { 00:15:07.380 "dma_device_id": "system", 00:15:07.380 "dma_device_type": 1 00:15:07.380 }, 00:15:07.380 { 00:15:07.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.380 "dma_device_type": 2 00:15:07.380 }, 00:15:07.380 { 00:15:07.380 "dma_device_id": "system", 00:15:07.380 "dma_device_type": 1 00:15:07.380 }, 00:15:07.380 { 00:15:07.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.380 "dma_device_type": 2 00:15:07.380 }, 00:15:07.380 { 00:15:07.380 "dma_device_id": "system", 00:15:07.380 "dma_device_type": 1 00:15:07.380 }, 00:15:07.380 { 00:15:07.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.380 "dma_device_type": 2 00:15:07.380 } 00:15:07.380 ], 00:15:07.380 "driver_specific": { 00:15:07.380 "raid": { 00:15:07.380 "uuid": "d3b19b6b-5993-45dc-9de8-2c0197b48f53", 00:15:07.380 "strip_size_kb": 64, 00:15:07.380 "state": "online", 00:15:07.380 "raid_level": "concat", 00:15:07.380 "superblock": true, 00:15:07.380 "num_base_bdevs": 3, 00:15:07.380 "num_base_bdevs_discovered": 3, 00:15:07.380 "num_base_bdevs_operational": 3, 00:15:07.380 "base_bdevs_list": [ 00:15:07.380 { 00:15:07.380 "name": "pt1", 00:15:07.380 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:07.380 "is_configured": true, 00:15:07.380 "data_offset": 2048, 00:15:07.380 "data_size": 63488 00:15:07.380 }, 00:15:07.380 { 00:15:07.380 "name": "pt2", 00:15:07.380 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:07.380 "is_configured": true, 00:15:07.380 "data_offset": 2048, 00:15:07.380 "data_size": 63488 00:15:07.380 }, 00:15:07.380 { 00:15:07.380 "name": "pt3", 00:15:07.380 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:07.380 "is_configured": true, 00:15:07.380 "data_offset": 2048, 00:15:07.380 
"data_size": 63488 00:15:07.380 } 00:15:07.380 ] 00:15:07.380 } 00:15:07.380 } 00:15:07.380 }' 00:15:07.380 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:07.380 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:07.380 pt2 00:15:07.380 pt3' 00:15:07.380 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:07.381 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:07.381 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:07.381 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:07.381 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.381 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:07.381 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.381 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.381 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:07.381 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:07.381 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:07.381 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:07.381 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.381 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:07.381 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:07.381 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.381 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:07.381 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:07.381 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:07.381 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:07.381 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:07.381 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.381 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.381 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.637 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:07.637 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:07.637 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:07.637 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:07.637 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.637 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.637 [2024-12-06 13:09:54.427875] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:07.637 13:09:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.637 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d3b19b6b-5993-45dc-9de8-2c0197b48f53 00:15:07.637 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d3b19b6b-5993-45dc-9de8-2c0197b48f53 ']' 00:15:07.637 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:07.637 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.637 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.637 [2024-12-06 13:09:54.483513] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:07.637 [2024-12-06 13:09:54.483561] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:07.638 [2024-12-06 13:09:54.483689] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:07.638 [2024-12-06 13:09:54.483799] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:07.638 [2024-12-06 13:09:54.483825] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.638 13:09:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.638 [2024-12-06 13:09:54.635653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:07.638 [2024-12-06 13:09:54.638387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 
00:15:07.638 [2024-12-06 13:09:54.638489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:07.638 [2024-12-06 13:09:54.638581] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:07.638 [2024-12-06 13:09:54.638689] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:07.638 [2024-12-06 13:09:54.638723] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:07.638 [2024-12-06 13:09:54.638752] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:07.638 [2024-12-06 13:09:54.638767] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:07.638 request: 00:15:07.638 { 00:15:07.638 "name": "raid_bdev1", 00:15:07.638 "raid_level": "concat", 00:15:07.638 "base_bdevs": [ 00:15:07.638 "malloc1", 00:15:07.638 "malloc2", 00:15:07.638 "malloc3" 00:15:07.638 ], 00:15:07.638 "strip_size_kb": 64, 00:15:07.638 "superblock": false, 00:15:07.638 "method": "bdev_raid_create", 00:15:07.638 "req_id": 1 00:15:07.638 } 00:15:07.638 Got JSON-RPC error response 00:15:07.638 response: 00:15:07.638 { 00:15:07.638 "code": -17, 00:15:07.638 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:07.638 } 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.638 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.895 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.895 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:07.895 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:07.895 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:07.895 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.895 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.895 [2024-12-06 13:09:54.699569] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:07.895 [2024-12-06 13:09:54.699661] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.895 [2024-12-06 13:09:54.699703] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:07.895 [2024-12-06 13:09:54.699719] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.895 [2024-12-06 13:09:54.702939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.895 [2024-12-06 13:09:54.702984] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:07.895 [2024-12-06 13:09:54.703114] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:07.895 [2024-12-06 13:09:54.703197] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:07.895 pt1 00:15:07.895 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.895 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:15:07.895 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:07.895 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:07.895 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:07.895 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:07.895 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:07.895 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.895 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.895 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.895 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.895 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.895 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.895 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.895 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.895 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.895 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.895 "name": "raid_bdev1", 
00:15:07.895 "uuid": "d3b19b6b-5993-45dc-9de8-2c0197b48f53", 00:15:07.895 "strip_size_kb": 64, 00:15:07.895 "state": "configuring", 00:15:07.895 "raid_level": "concat", 00:15:07.895 "superblock": true, 00:15:07.895 "num_base_bdevs": 3, 00:15:07.895 "num_base_bdevs_discovered": 1, 00:15:07.895 "num_base_bdevs_operational": 3, 00:15:07.895 "base_bdevs_list": [ 00:15:07.895 { 00:15:07.895 "name": "pt1", 00:15:07.895 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:07.895 "is_configured": true, 00:15:07.895 "data_offset": 2048, 00:15:07.895 "data_size": 63488 00:15:07.895 }, 00:15:07.895 { 00:15:07.895 "name": null, 00:15:07.895 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:07.895 "is_configured": false, 00:15:07.895 "data_offset": 2048, 00:15:07.895 "data_size": 63488 00:15:07.895 }, 00:15:07.895 { 00:15:07.895 "name": null, 00:15:07.895 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:07.895 "is_configured": false, 00:15:07.895 "data_offset": 2048, 00:15:07.895 "data_size": 63488 00:15:07.895 } 00:15:07.895 ] 00:15:07.895 }' 00:15:07.895 13:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.895 13:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.461 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:08.461 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:08.461 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.461 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.461 [2024-12-06 13:09:55.283772] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:08.461 [2024-12-06 13:09:55.283879] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.461 [2024-12-06 13:09:55.283923] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:08.461 [2024-12-06 13:09:55.283939] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.461 [2024-12-06 13:09:55.284615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.461 [2024-12-06 13:09:55.284657] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:08.461 [2024-12-06 13:09:55.284797] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:08.461 [2024-12-06 13:09:55.284848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:08.461 pt2 00:15:08.461 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.461 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:08.461 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.461 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.461 [2024-12-06 13:09:55.291721] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:08.461 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.461 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:15:08.461 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.461 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:08.461 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:08.461 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.461 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:15:08.461 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.461 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.461 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.461 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.461 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.461 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.462 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.462 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.462 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.462 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.462 "name": "raid_bdev1", 00:15:08.462 "uuid": "d3b19b6b-5993-45dc-9de8-2c0197b48f53", 00:15:08.462 "strip_size_kb": 64, 00:15:08.462 "state": "configuring", 00:15:08.462 "raid_level": "concat", 00:15:08.462 "superblock": true, 00:15:08.462 "num_base_bdevs": 3, 00:15:08.462 "num_base_bdevs_discovered": 1, 00:15:08.462 "num_base_bdevs_operational": 3, 00:15:08.462 "base_bdevs_list": [ 00:15:08.462 { 00:15:08.462 "name": "pt1", 00:15:08.462 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:08.462 "is_configured": true, 00:15:08.462 "data_offset": 2048, 00:15:08.462 "data_size": 63488 00:15:08.462 }, 00:15:08.462 { 00:15:08.462 "name": null, 00:15:08.462 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:08.462 "is_configured": false, 00:15:08.462 "data_offset": 0, 00:15:08.462 "data_size": 63488 00:15:08.462 }, 00:15:08.462 { 00:15:08.462 "name": null, 00:15:08.462 
"uuid": "00000000-0000-0000-0000-000000000003", 00:15:08.462 "is_configured": false, 00:15:08.462 "data_offset": 2048, 00:15:08.462 "data_size": 63488 00:15:08.462 } 00:15:08.462 ] 00:15:08.462 }' 00:15:08.462 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.462 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.028 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:09.028 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:09.028 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:09.028 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.028 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.028 [2024-12-06 13:09:55.807923] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:09.028 [2024-12-06 13:09:55.808029] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.028 [2024-12-06 13:09:55.808064] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:09.028 [2024-12-06 13:09:55.808084] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.028 [2024-12-06 13:09:55.808806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.028 [2024-12-06 13:09:55.808841] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:09.028 [2024-12-06 13:09:55.808961] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:09.028 [2024-12-06 13:09:55.809005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:09.028 pt2 00:15:09.028 13:09:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.028 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:09.028 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:09.028 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:09.028 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.028 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.028 [2024-12-06 13:09:55.815816] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:09.028 [2024-12-06 13:09:55.816034] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.028 [2024-12-06 13:09:55.816069] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:09.028 [2024-12-06 13:09:55.816087] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.028 [2024-12-06 13:09:55.816602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.028 [2024-12-06 13:09:55.816648] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:09.028 [2024-12-06 13:09:55.816741] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:09.028 [2024-12-06 13:09:55.816776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:09.028 [2024-12-06 13:09:55.816937] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:09.028 [2024-12-06 13:09:55.816967] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:09.028 [2024-12-06 13:09:55.817307] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:15:09.028 [2024-12-06 13:09:55.817533] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:09.028 [2024-12-06 13:09:55.817550] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:09.028 [2024-12-06 13:09:55.817750] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.028 pt3 00:15:09.028 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.028 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:09.028 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:09.028 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:15:09.028 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.028 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.028 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:09.028 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.028 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:09.028 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.028 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.028 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.028 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.028 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.028 13:09:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.028 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.028 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.028 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.028 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.028 "name": "raid_bdev1", 00:15:09.028 "uuid": "d3b19b6b-5993-45dc-9de8-2c0197b48f53", 00:15:09.028 "strip_size_kb": 64, 00:15:09.028 "state": "online", 00:15:09.028 "raid_level": "concat", 00:15:09.028 "superblock": true, 00:15:09.028 "num_base_bdevs": 3, 00:15:09.028 "num_base_bdevs_discovered": 3, 00:15:09.028 "num_base_bdevs_operational": 3, 00:15:09.028 "base_bdevs_list": [ 00:15:09.028 { 00:15:09.028 "name": "pt1", 00:15:09.028 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:09.028 "is_configured": true, 00:15:09.028 "data_offset": 2048, 00:15:09.028 "data_size": 63488 00:15:09.028 }, 00:15:09.028 { 00:15:09.028 "name": "pt2", 00:15:09.028 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:09.028 "is_configured": true, 00:15:09.028 "data_offset": 2048, 00:15:09.028 "data_size": 63488 00:15:09.028 }, 00:15:09.028 { 00:15:09.028 "name": "pt3", 00:15:09.028 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:09.028 "is_configured": true, 00:15:09.028 "data_offset": 2048, 00:15:09.028 "data_size": 63488 00:15:09.028 } 00:15:09.028 ] 00:15:09.028 }' 00:15:09.028 13:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.028 13:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.286 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:09.286 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=raid_bdev1 00:15:09.286 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:09.286 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:09.286 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:09.286 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:09.544 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:09.544 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.544 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.544 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:09.544 [2024-12-06 13:09:56.308485] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:09.544 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.544 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:09.544 "name": "raid_bdev1", 00:15:09.544 "aliases": [ 00:15:09.544 "d3b19b6b-5993-45dc-9de8-2c0197b48f53" 00:15:09.544 ], 00:15:09.544 "product_name": "Raid Volume", 00:15:09.544 "block_size": 512, 00:15:09.544 "num_blocks": 190464, 00:15:09.544 "uuid": "d3b19b6b-5993-45dc-9de8-2c0197b48f53", 00:15:09.544 "assigned_rate_limits": { 00:15:09.544 "rw_ios_per_sec": 0, 00:15:09.544 "rw_mbytes_per_sec": 0, 00:15:09.544 "r_mbytes_per_sec": 0, 00:15:09.544 "w_mbytes_per_sec": 0 00:15:09.544 }, 00:15:09.544 "claimed": false, 00:15:09.544 "zoned": false, 00:15:09.544 "supported_io_types": { 00:15:09.544 "read": true, 00:15:09.544 "write": true, 00:15:09.544 "unmap": true, 00:15:09.544 "flush": true, 00:15:09.544 "reset": true, 00:15:09.544 "nvme_admin": false, 00:15:09.544 "nvme_io": false, 00:15:09.544 
"nvme_io_md": false, 00:15:09.544 "write_zeroes": true, 00:15:09.544 "zcopy": false, 00:15:09.544 "get_zone_info": false, 00:15:09.544 "zone_management": false, 00:15:09.544 "zone_append": false, 00:15:09.544 "compare": false, 00:15:09.544 "compare_and_write": false, 00:15:09.544 "abort": false, 00:15:09.544 "seek_hole": false, 00:15:09.544 "seek_data": false, 00:15:09.544 "copy": false, 00:15:09.544 "nvme_iov_md": false 00:15:09.544 }, 00:15:09.544 "memory_domains": [ 00:15:09.544 { 00:15:09.544 "dma_device_id": "system", 00:15:09.544 "dma_device_type": 1 00:15:09.544 }, 00:15:09.544 { 00:15:09.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.544 "dma_device_type": 2 00:15:09.544 }, 00:15:09.544 { 00:15:09.544 "dma_device_id": "system", 00:15:09.544 "dma_device_type": 1 00:15:09.544 }, 00:15:09.544 { 00:15:09.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.544 "dma_device_type": 2 00:15:09.544 }, 00:15:09.544 { 00:15:09.544 "dma_device_id": "system", 00:15:09.544 "dma_device_type": 1 00:15:09.544 }, 00:15:09.544 { 00:15:09.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.544 "dma_device_type": 2 00:15:09.544 } 00:15:09.544 ], 00:15:09.544 "driver_specific": { 00:15:09.544 "raid": { 00:15:09.544 "uuid": "d3b19b6b-5993-45dc-9de8-2c0197b48f53", 00:15:09.544 "strip_size_kb": 64, 00:15:09.544 "state": "online", 00:15:09.544 "raid_level": "concat", 00:15:09.544 "superblock": true, 00:15:09.544 "num_base_bdevs": 3, 00:15:09.544 "num_base_bdevs_discovered": 3, 00:15:09.544 "num_base_bdevs_operational": 3, 00:15:09.544 "base_bdevs_list": [ 00:15:09.544 { 00:15:09.544 "name": "pt1", 00:15:09.544 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:09.544 "is_configured": true, 00:15:09.544 "data_offset": 2048, 00:15:09.544 "data_size": 63488 00:15:09.544 }, 00:15:09.544 { 00:15:09.544 "name": "pt2", 00:15:09.544 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:09.544 "is_configured": true, 00:15:09.544 "data_offset": 2048, 00:15:09.544 "data_size": 
63488 00:15:09.544 }, 00:15:09.544 { 00:15:09.544 "name": "pt3", 00:15:09.544 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:09.544 "is_configured": true, 00:15:09.544 "data_offset": 2048, 00:15:09.544 "data_size": 63488 00:15:09.544 } 00:15:09.544 ] 00:15:09.544 } 00:15:09.544 } 00:15:09.544 }' 00:15:09.544 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:09.544 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:09.544 pt2 00:15:09.544 pt3' 00:15:09.544 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.544 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:09.544 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:09.544 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:09.544 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.544 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.544 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.544 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.544 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:09.544 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:09.544 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:09.545 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:15:09.545 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.545 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.545 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.545 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.545 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:09.545 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:09.545 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:09.545 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:09.545 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.545 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.545 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.803 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.803 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:09.803 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:09.803 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:09.803 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.803 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.803 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] 
| .uuid' 00:15:09.803 [2024-12-06 13:09:56.600484] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:09.803 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.803 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d3b19b6b-5993-45dc-9de8-2c0197b48f53 '!=' d3b19b6b-5993-45dc-9de8-2c0197b48f53 ']' 00:15:09.803 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:15:09.803 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:09.803 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:09.803 13:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67142 00:15:09.803 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 67142 ']' 00:15:09.803 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 67142 00:15:09.803 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:09.803 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:09.803 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67142 00:15:09.804 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:09.804 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:09.804 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67142' 00:15:09.804 killing process with pid 67142 00:15:09.804 13:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 67142 00:15:09.804 [2024-12-06 13:09:56.682880] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:09.804 13:09:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 67142 00:15:09.804 [2024-12-06 13:09:56.683304] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:09.804 [2024-12-06 13:09:56.683659] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:09.804 [2024-12-06 13:09:56.683834] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:10.062 [2024-12-06 13:09:56.994180] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:11.436 13:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:11.436 00:15:11.436 real 0m5.884s 00:15:11.436 user 0m8.779s 00:15:11.436 sys 0m0.918s 00:15:11.436 13:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:11.436 ************************************ 00:15:11.436 END TEST raid_superblock_test 00:15:11.436 ************************************ 00:15:11.436 13:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.436 13:09:58 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:15:11.436 13:09:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:11.436 13:09:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:11.436 13:09:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:11.436 ************************************ 00:15:11.436 START TEST raid_read_error_test 00:15:11.436 ************************************ 00:15:11.436 13:09:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:15:11.436 13:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:15:11.436 13:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 
00:15:11.436 13:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:15:11.436 13:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:11.436 13:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:11.436 13:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:11.436 13:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:11.436 13:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:11.436 13:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:11.436 13:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:11.436 13:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:11.436 13:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:11.436 13:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:11.436 13:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:11.436 13:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:11.436 13:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:11.436 13:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:11.436 13:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:11.436 13:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:11.437 13:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:11.437 13:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:11.437 13:09:58 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:15:11.437 13:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:15:11.437 13:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:15:11.437 13:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:11.437 13:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.eTZTqKDKJU 00:15:11.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.437 13:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67403 00:15:11.437 13:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67403 00:15:11.437 13:09:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67403 ']' 00:15:11.437 13:09:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:11.437 13:09:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.437 13:09:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:11.437 13:09:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.437 13:09:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:11.437 13:09:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.437 [2024-12-06 13:09:58.242305] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:15:11.437 [2024-12-06 13:09:58.243097] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67403 ] 00:15:11.437 [2024-12-06 13:09:58.427217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.695 [2024-12-06 13:09:58.576880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.954 [2024-12-06 13:09:58.788415] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:11.954 [2024-12-06 13:09:58.788620] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:12.521 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:12.521 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:12.521 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:12.521 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:12.521 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.521 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.521 BaseBdev1_malloc 00:15:12.521 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.521 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:12.521 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.521 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.521 true 00:15:12.521 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:12.521 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:12.521 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.521 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.521 [2024-12-06 13:09:59.344209] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:12.521 [2024-12-06 13:09:59.344354] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.521 [2024-12-06 13:09:59.344386] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:12.521 [2024-12-06 13:09:59.344404] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.521 [2024-12-06 13:09:59.347192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.521 [2024-12-06 13:09:59.347258] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:12.521 BaseBdev1 00:15:12.521 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.521 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:12.521 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.522 BaseBdev2_malloc 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.522 true 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.522 [2024-12-06 13:09:59.404506] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:12.522 [2024-12-06 13:09:59.404589] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.522 [2024-12-06 13:09:59.404616] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:12.522 [2024-12-06 13:09:59.404633] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.522 [2024-12-06 13:09:59.407402] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.522 [2024-12-06 13:09:59.407453] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:12.522 BaseBdev2 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.522 BaseBdev3_malloc 00:15:12.522 13:09:59 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.522 true 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.522 [2024-12-06 13:09:59.476616] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:12.522 [2024-12-06 13:09:59.476696] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.522 [2024-12-06 13:09:59.476723] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:12.522 [2024-12-06 13:09:59.476741] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.522 [2024-12-06 13:09:59.479584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.522 [2024-12-06 13:09:59.479769] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:12.522 BaseBdev3 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.522 [2024-12-06 13:09:59.488773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:12.522 [2024-12-06 13:09:59.491418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:12.522 [2024-12-06 13:09:59.491673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:12.522 [2024-12-06 13:09:59.492004] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:12.522 [2024-12-06 13:09:59.492137] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:12.522 [2024-12-06 13:09:59.492523] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:15:12.522 [2024-12-06 13:09:59.492858] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:12.522 [2024-12-06 13:09:59.492892] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:12.522 [2024-12-06 13:09:59.493129] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.522 13:09:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.522 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.780 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.780 "name": "raid_bdev1", 00:15:12.780 "uuid": "2d38da77-cd45-4ffe-90c0-71ba26c60582", 00:15:12.780 "strip_size_kb": 64, 00:15:12.780 "state": "online", 00:15:12.780 "raid_level": "concat", 00:15:12.780 "superblock": true, 00:15:12.780 "num_base_bdevs": 3, 00:15:12.780 "num_base_bdevs_discovered": 3, 00:15:12.780 "num_base_bdevs_operational": 3, 00:15:12.780 "base_bdevs_list": [ 00:15:12.780 { 00:15:12.780 "name": "BaseBdev1", 00:15:12.780 "uuid": "13a0862b-4e37-5fa6-b16d-ca78106f8ae5", 00:15:12.781 "is_configured": true, 00:15:12.781 "data_offset": 2048, 00:15:12.781 "data_size": 63488 00:15:12.781 }, 00:15:12.781 { 00:15:12.781 "name": "BaseBdev2", 00:15:12.781 "uuid": "56f72c70-2f6f-5cab-b761-47426dd934de", 00:15:12.781 "is_configured": true, 00:15:12.781 "data_offset": 2048, 00:15:12.781 "data_size": 63488 
00:15:12.781 }, 00:15:12.781 { 00:15:12.781 "name": "BaseBdev3", 00:15:12.781 "uuid": "b4d91b00-51a7-5102-8f2c-7d7e6a2405f5", 00:15:12.781 "is_configured": true, 00:15:12.781 "data_offset": 2048, 00:15:12.781 "data_size": 63488 00:15:12.781 } 00:15:12.781 ] 00:15:12.781 }' 00:15:12.781 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.781 13:09:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.038 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:13.038 13:09:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:13.300 [2024-12-06 13:10:00.106752] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:15:14.244 13:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:14.244 13:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.244 13:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.244 13:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.244 13:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:14.244 13:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:15:14.244 13:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:15:14.244 13:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:15:14.244 13:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.244 13:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:15:14.244 13:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:14.244 13:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.244 13:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:14.244 13:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.244 13:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.244 13:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.244 13:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.244 13:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.244 13:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.244 13:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.244 13:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.244 13:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.244 13:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.244 "name": "raid_bdev1", 00:15:14.244 "uuid": "2d38da77-cd45-4ffe-90c0-71ba26c60582", 00:15:14.244 "strip_size_kb": 64, 00:15:14.244 "state": "online", 00:15:14.244 "raid_level": "concat", 00:15:14.244 "superblock": true, 00:15:14.244 "num_base_bdevs": 3, 00:15:14.244 "num_base_bdevs_discovered": 3, 00:15:14.245 "num_base_bdevs_operational": 3, 00:15:14.245 "base_bdevs_list": [ 00:15:14.245 { 00:15:14.245 "name": "BaseBdev1", 00:15:14.245 "uuid": "13a0862b-4e37-5fa6-b16d-ca78106f8ae5", 00:15:14.245 "is_configured": true, 00:15:14.245 "data_offset": 2048, 00:15:14.245 "data_size": 63488 
00:15:14.245 }, 00:15:14.245 { 00:15:14.245 "name": "BaseBdev2", 00:15:14.245 "uuid": "56f72c70-2f6f-5cab-b761-47426dd934de", 00:15:14.245 "is_configured": true, 00:15:14.245 "data_offset": 2048, 00:15:14.245 "data_size": 63488 00:15:14.245 }, 00:15:14.245 { 00:15:14.245 "name": "BaseBdev3", 00:15:14.245 "uuid": "b4d91b00-51a7-5102-8f2c-7d7e6a2405f5", 00:15:14.245 "is_configured": true, 00:15:14.245 "data_offset": 2048, 00:15:14.245 "data_size": 63488 00:15:14.245 } 00:15:14.245 ] 00:15:14.245 }' 00:15:14.245 13:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.245 13:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.811 13:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:14.811 13:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.811 13:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.811 [2024-12-06 13:10:01.530276] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:14.811 [2024-12-06 13:10:01.530335] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:14.811 [2024-12-06 13:10:01.533769] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:14.811 [2024-12-06 13:10:01.533835] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.811 [2024-12-06 13:10:01.533889] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:14.811 [2024-12-06 13:10:01.533904] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:14.811 { 00:15:14.811 "results": [ 00:15:14.811 { 00:15:14.811 "job": "raid_bdev1", 00:15:14.811 "core_mask": "0x1", 00:15:14.811 "workload": "randrw", 00:15:14.811 "percentage": 50, 
00:15:14.811 "status": "finished", 00:15:14.811 "queue_depth": 1, 00:15:14.811 "io_size": 131072, 00:15:14.811 "runtime": 1.421259, 00:15:14.811 "iops": 10514.621191492895, 00:15:14.811 "mibps": 1314.327648936612, 00:15:14.811 "io_failed": 1, 00:15:14.811 "io_timeout": 0, 00:15:14.811 "avg_latency_us": 132.23108148058031, 00:15:14.811 "min_latency_us": 43.054545454545455, 00:15:14.811 "max_latency_us": 1839.4763636363637 00:15:14.811 } 00:15:14.811 ], 00:15:14.811 "core_count": 1 00:15:14.811 } 00:15:14.811 13:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.811 13:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67403 00:15:14.811 13:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67403 ']' 00:15:14.811 13:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67403 00:15:14.811 13:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:15:14.811 13:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:14.811 13:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67403 00:15:14.811 killing process with pid 67403 00:15:14.811 13:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:14.811 13:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:14.811 13:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67403' 00:15:14.811 13:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67403 00:15:14.811 13:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67403 00:15:14.811 [2024-12-06 13:10:01.570275] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:14.811 [2024-12-06 
13:10:01.780697] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:16.185 13:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.eTZTqKDKJU 00:15:16.185 13:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:16.185 13:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:16.185 13:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:15:16.185 13:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:15:16.185 13:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:16.185 13:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:16.185 13:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:15:16.185 00:15:16.185 real 0m4.810s 00:15:16.185 user 0m5.951s 00:15:16.185 sys 0m0.623s 00:15:16.185 13:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:16.185 ************************************ 00:15:16.185 END TEST raid_read_error_test 00:15:16.185 ************************************ 00:15:16.185 13:10:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.185 13:10:02 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:15:16.185 13:10:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:16.185 13:10:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:16.185 13:10:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:16.185 ************************************ 00:15:16.185 START TEST raid_write_error_test 00:15:16.185 ************************************ 00:15:16.185 13:10:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:15:16.185 13:10:02 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:15:16.185 13:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:15:16.185 13:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:15:16.185 13:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:16.185 13:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:16.185 13:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:16.185 13:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:16.185 13:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:16.185 13:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:16.185 13:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:16.185 13:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:16.185 13:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:16.185 13:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:16.185 13:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:16.185 13:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:16.185 13:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:16.185 13:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:16.185 13:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:16.185 13:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:16.185 13:10:02 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:16.185 13:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:16.185 13:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:15:16.185 13:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:15:16.185 13:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:15:16.185 13:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:16.185 13:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.iWswZoELTx 00:15:16.185 13:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67549 00:15:16.185 13:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67549 00:15:16.185 13:10:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67549 ']' 00:15:16.185 13:10:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:16.185 13:10:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.185 13:10:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:16.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.185 13:10:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:16.185 13:10:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:16.185 13:10:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.185 [2024-12-06 13:10:03.098051] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:15:16.185 [2024-12-06 13:10:03.098252] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67549 ] 00:15:16.444 [2024-12-06 13:10:03.291648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.444 [2024-12-06 13:10:03.440557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.702 [2024-12-06 13:10:03.642997] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:16.702 [2024-12-06 13:10:03.643060] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:17.269 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:17.269 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:17.269 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:17.269 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:17.269 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.269 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.269 BaseBdev1_malloc 00:15:17.269 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.269 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:15:17.269 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.269 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.269 true 00:15:17.269 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.269 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:17.269 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.269 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.269 [2024-12-06 13:10:04.167311] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:17.269 [2024-12-06 13:10:04.167398] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.269 [2024-12-06 13:10:04.167436] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:17.269 [2024-12-06 13:10:04.167455] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.269 [2024-12-06 13:10:04.170364] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.269 [2024-12-06 13:10:04.170415] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:17.269 BaseBdev1 00:15:17.269 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.269 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:17.269 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:17.269 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.269 13:10:04 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:17.269 BaseBdev2_malloc 00:15:17.269 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.269 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:17.269 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.269 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.269 true 00:15:17.269 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.269 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:17.269 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.269 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.269 [2024-12-06 13:10:04.231542] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:17.269 [2024-12-06 13:10:04.231631] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.269 [2024-12-06 13:10:04.231657] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:17.269 [2024-12-06 13:10:04.231675] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.269 [2024-12-06 13:10:04.234560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.269 [2024-12-06 13:10:04.234609] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:17.269 BaseBdev2 00:15:17.269 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.269 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:17.269 13:10:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:17.269 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.269 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.545 BaseBdev3_malloc 00:15:17.545 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.545 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:17.545 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.545 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.545 true 00:15:17.545 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.545 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:17.545 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.545 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.545 [2024-12-06 13:10:04.314680] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:17.545 [2024-12-06 13:10:04.314755] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.545 [2024-12-06 13:10:04.314783] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:17.545 [2024-12-06 13:10:04.314802] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.545 [2024-12-06 13:10:04.317811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.545 [2024-12-06 13:10:04.317863] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:15:17.545 BaseBdev3 00:15:17.545 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.545 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:15:17.545 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.545 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.545 [2024-12-06 13:10:04.322841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:17.545 [2024-12-06 13:10:04.325482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:17.545 [2024-12-06 13:10:04.325604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:17.545 [2024-12-06 13:10:04.325875] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:17.545 [2024-12-06 13:10:04.325895] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:17.545 [2024-12-06 13:10:04.326214] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:15:17.545 [2024-12-06 13:10:04.326442] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:17.545 [2024-12-06 13:10:04.326466] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:17.545 [2024-12-06 13:10:04.326889] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.545 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.545 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:15:17.545 13:10:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.545 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.545 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:17.545 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.545 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:17.545 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.545 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.545 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.545 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.545 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.545 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.545 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.545 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.545 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.545 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.545 "name": "raid_bdev1", 00:15:17.545 "uuid": "4da06087-5ab4-43be-b0e9-53a04977fdb3", 00:15:17.545 "strip_size_kb": 64, 00:15:17.545 "state": "online", 00:15:17.545 "raid_level": "concat", 00:15:17.545 "superblock": true, 00:15:17.545 "num_base_bdevs": 3, 00:15:17.545 "num_base_bdevs_discovered": 3, 00:15:17.545 "num_base_bdevs_operational": 3, 00:15:17.545 "base_bdevs_list": [ 00:15:17.545 { 00:15:17.545 
"name": "BaseBdev1", 00:15:17.545 "uuid": "4b5a9e4f-d151-5564-a0e8-7310e77f4952", 00:15:17.545 "is_configured": true, 00:15:17.545 "data_offset": 2048, 00:15:17.545 "data_size": 63488 00:15:17.545 }, 00:15:17.545 { 00:15:17.545 "name": "BaseBdev2", 00:15:17.545 "uuid": "3234444d-d003-58a0-a863-1c49fb385531", 00:15:17.545 "is_configured": true, 00:15:17.545 "data_offset": 2048, 00:15:17.545 "data_size": 63488 00:15:17.545 }, 00:15:17.545 { 00:15:17.545 "name": "BaseBdev3", 00:15:17.545 "uuid": "77a93830-b268-5dfb-a6ae-80ab342fd9f6", 00:15:17.545 "is_configured": true, 00:15:17.545 "data_offset": 2048, 00:15:17.545 "data_size": 63488 00:15:17.545 } 00:15:17.545 ] 00:15:17.545 }' 00:15:17.545 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.545 13:10:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.113 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:18.113 13:10:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:18.113 [2024-12-06 13:10:05.000434] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:15:19.046 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:19.046 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.046 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.046 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.046 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:19.046 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:15:19.046 13:10:05 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:15:19.046 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:15:19.046 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.046 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.046 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:19.046 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.046 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:19.046 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.046 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.046 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.046 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.046 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.046 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.046 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.046 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.046 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.046 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.046 "name": "raid_bdev1", 00:15:19.046 "uuid": "4da06087-5ab4-43be-b0e9-53a04977fdb3", 00:15:19.046 "strip_size_kb": 64, 00:15:19.046 "state": "online", 
00:15:19.046 "raid_level": "concat", 00:15:19.046 "superblock": true, 00:15:19.046 "num_base_bdevs": 3, 00:15:19.046 "num_base_bdevs_discovered": 3, 00:15:19.046 "num_base_bdevs_operational": 3, 00:15:19.046 "base_bdevs_list": [ 00:15:19.046 { 00:15:19.046 "name": "BaseBdev1", 00:15:19.046 "uuid": "4b5a9e4f-d151-5564-a0e8-7310e77f4952", 00:15:19.046 "is_configured": true, 00:15:19.046 "data_offset": 2048, 00:15:19.046 "data_size": 63488 00:15:19.046 }, 00:15:19.046 { 00:15:19.046 "name": "BaseBdev2", 00:15:19.046 "uuid": "3234444d-d003-58a0-a863-1c49fb385531", 00:15:19.046 "is_configured": true, 00:15:19.046 "data_offset": 2048, 00:15:19.046 "data_size": 63488 00:15:19.046 }, 00:15:19.046 { 00:15:19.046 "name": "BaseBdev3", 00:15:19.046 "uuid": "77a93830-b268-5dfb-a6ae-80ab342fd9f6", 00:15:19.046 "is_configured": true, 00:15:19.046 "data_offset": 2048, 00:15:19.046 "data_size": 63488 00:15:19.046 } 00:15:19.046 ] 00:15:19.046 }' 00:15:19.046 13:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.046 13:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.628 13:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:19.628 13:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.628 13:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.628 [2024-12-06 13:10:06.392058] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:19.628 [2024-12-06 13:10:06.392319] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:19.628 [2024-12-06 13:10:06.395911] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:19.628 [2024-12-06 13:10:06.396162] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.628 [2024-12-06 13:10:06.396369] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:19.628 [2024-12-06 13:10:06.396561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:19.628 { 00:15:19.628 "results": [ 00:15:19.628 { 00:15:19.628 "job": "raid_bdev1", 00:15:19.628 "core_mask": "0x1", 00:15:19.628 "workload": "randrw", 00:15:19.628 "percentage": 50, 00:15:19.628 "status": "finished", 00:15:19.628 "queue_depth": 1, 00:15:19.628 "io_size": 131072, 00:15:19.628 "runtime": 1.38964, 00:15:19.628 "iops": 10620.73630580582, 00:15:19.628 "mibps": 1327.5920382257275, 00:15:19.628 "io_failed": 1, 00:15:19.628 "io_timeout": 0, 00:15:19.628 "avg_latency_us": 130.86294752402068, 00:15:19.628 "min_latency_us": 40.96, 00:15:19.628 "max_latency_us": 1869.2654545454545 00:15:19.628 } 00:15:19.628 ], 00:15:19.628 "core_count": 1 00:15:19.628 } 00:15:19.628 13:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.628 13:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67549 00:15:19.628 13:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67549 ']' 00:15:19.628 13:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67549 00:15:19.628 13:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:15:19.628 13:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:19.628 13:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67549 00:15:19.628 13:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:19.628 13:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:19.628 13:10:06 bdev_raid.raid_write_error_test --
common/autotest_common.sh@972 -- # echo 'killing process with pid 67549' 00:15:19.628 killing process with pid 67549 00:15:19.628 13:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67549 00:15:19.628 [2024-12-06 13:10:06.438117] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:19.628 13:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67549 00:15:19.628 [2024-12-06 13:10:06.640577] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:21.011 13:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.iWswZoELTx 00:15:21.011 13:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:21.011 13:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:21.011 13:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:15:21.011 13:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:15:21.011 13:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:21.011 13:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:21.011 13:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:15:21.011 00:15:21.011 real 0m4.778s 00:15:21.011 user 0m5.933s 00:15:21.011 sys 0m0.614s 00:15:21.011 ************************************ 00:15:21.011 END TEST raid_write_error_test 00:15:21.011 ************************************ 00:15:21.011 13:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:21.011 13:10:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.011 13:10:07 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:15:21.011 13:10:07 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:15:21.011 13:10:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:21.011 13:10:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:21.011 13:10:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:21.011 ************************************ 00:15:21.011 START TEST raid_state_function_test 00:15:21.011 ************************************ 00:15:21.011 13:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:15:21.011 13:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:21.011 13:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:21.011 13:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:21.011 13:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:21.011 13:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:21.011 13:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:21.011 13:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:21.011 13:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:21.011 13:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:21.011 13:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:21.011 13:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:21.011 13:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:21.011 13:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:21.011 13:10:07 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:21.011 13:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:21.011 13:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:21.011 13:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:21.011 13:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:21.011 13:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:21.011 13:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:21.011 13:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:21.011 13:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:21.011 13:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:21.011 13:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:21.011 13:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:21.011 Process raid pid: 67693 00:15:21.011 13:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67693 00:15:21.011 13:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67693' 00:15:21.011 13:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:21.011 13:10:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67693 00:15:21.011 13:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67693 ']' 00:15:21.011 13:10:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.011 13:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:21.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:21.011 13:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.011 13:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:21.011 13:10:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.011 [2024-12-06 13:10:07.899265] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:15:21.011 [2024-12-06 13:10:07.899412] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:21.269 [2024-12-06 13:10:08.075191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.269 [2024-12-06 13:10:08.205470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.527 [2024-12-06 13:10:08.411957] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:21.527 [2024-12-06 13:10:08.412005] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:22.095 13:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:22.095 13:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:22.095 13:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:22.095 13:10:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.095 13:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.095 [2024-12-06 13:10:08.891984] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:22.095 [2024-12-06 13:10:08.892062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:22.095 [2024-12-06 13:10:08.892079] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:22.095 [2024-12-06 13:10:08.892096] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:22.095 [2024-12-06 13:10:08.892106] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:22.095 [2024-12-06 13:10:08.892120] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:22.095 13:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.095 13:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:22.095 13:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:22.095 13:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:22.095 13:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:22.095 13:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:22.095 13:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:22.095 13:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.095 13:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.095 
13:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.095 13:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.095 13:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.095 13:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.095 13:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.095 13:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.095 13:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.095 13:10:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.095 "name": "Existed_Raid", 00:15:22.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.095 "strip_size_kb": 0, 00:15:22.095 "state": "configuring", 00:15:22.095 "raid_level": "raid1", 00:15:22.095 "superblock": false, 00:15:22.095 "num_base_bdevs": 3, 00:15:22.095 "num_base_bdevs_discovered": 0, 00:15:22.095 "num_base_bdevs_operational": 3, 00:15:22.095 "base_bdevs_list": [ 00:15:22.095 { 00:15:22.095 "name": "BaseBdev1", 00:15:22.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.095 "is_configured": false, 00:15:22.095 "data_offset": 0, 00:15:22.095 "data_size": 0 00:15:22.095 }, 00:15:22.095 { 00:15:22.095 "name": "BaseBdev2", 00:15:22.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.095 "is_configured": false, 00:15:22.095 "data_offset": 0, 00:15:22.095 "data_size": 0 00:15:22.095 }, 00:15:22.095 { 00:15:22.095 "name": "BaseBdev3", 00:15:22.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.095 "is_configured": false, 00:15:22.095 "data_offset": 0, 00:15:22.095 "data_size": 0 00:15:22.095 } 00:15:22.095 ] 00:15:22.095 }' 00:15:22.095 13:10:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.095 13:10:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.670 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:22.671 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.671 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.671 [2024-12-06 13:10:09.376104] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:22.671 [2024-12-06 13:10:09.376153] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:22.671 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.671 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:22.671 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.671 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.671 [2024-12-06 13:10:09.384071] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:22.671 [2024-12-06 13:10:09.384121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:22.671 [2024-12-06 13:10:09.384135] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:22.671 [2024-12-06 13:10:09.384150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:22.671 [2024-12-06 13:10:09.384160] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:22.671 [2024-12-06 13:10:09.384173] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:22.671 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.671 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:22.671 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.671 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.671 [2024-12-06 13:10:09.428975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:22.671 BaseBdev1 00:15:22.671 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.671 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:22.671 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:22.671 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:22.671 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:22.671 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:22.671 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:22.671 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:22.671 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.671 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.671 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.671 13:10:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:22.671 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.671 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.671 [ 00:15:22.671 { 00:15:22.671 "name": "BaseBdev1", 00:15:22.671 "aliases": [ 00:15:22.671 "2702c394-9543-4a09-8a11-6238b416f530" 00:15:22.671 ], 00:15:22.671 "product_name": "Malloc disk", 00:15:22.671 "block_size": 512, 00:15:22.671 "num_blocks": 65536, 00:15:22.671 "uuid": "2702c394-9543-4a09-8a11-6238b416f530", 00:15:22.671 "assigned_rate_limits": { 00:15:22.671 "rw_ios_per_sec": 0, 00:15:22.671 "rw_mbytes_per_sec": 0, 00:15:22.671 "r_mbytes_per_sec": 0, 00:15:22.671 "w_mbytes_per_sec": 0 00:15:22.671 }, 00:15:22.671 "claimed": true, 00:15:22.671 "claim_type": "exclusive_write", 00:15:22.671 "zoned": false, 00:15:22.671 "supported_io_types": { 00:15:22.671 "read": true, 00:15:22.671 "write": true, 00:15:22.671 "unmap": true, 00:15:22.671 "flush": true, 00:15:22.671 "reset": true, 00:15:22.671 "nvme_admin": false, 00:15:22.671 "nvme_io": false, 00:15:22.671 "nvme_io_md": false, 00:15:22.671 "write_zeroes": true, 00:15:22.671 "zcopy": true, 00:15:22.671 "get_zone_info": false, 00:15:22.671 "zone_management": false, 00:15:22.671 "zone_append": false, 00:15:22.671 "compare": false, 00:15:22.671 "compare_and_write": false, 00:15:22.671 "abort": true, 00:15:22.671 "seek_hole": false, 00:15:22.671 "seek_data": false, 00:15:22.671 "copy": true, 00:15:22.671 "nvme_iov_md": false 00:15:22.671 }, 00:15:22.671 "memory_domains": [ 00:15:22.671 { 00:15:22.671 "dma_device_id": "system", 00:15:22.671 "dma_device_type": 1 00:15:22.671 }, 00:15:22.671 { 00:15:22.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.671 "dma_device_type": 2 00:15:22.671 } 00:15:22.671 ], 00:15:22.671 "driver_specific": {} 00:15:22.671 } 00:15:22.671 ] 00:15:22.671 13:10:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.671 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:22.671 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:22.671 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:22.671 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:22.672 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:22.672 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:22.672 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:22.672 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.672 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.672 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.672 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.672 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.672 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.672 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.672 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.672 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.672 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:15:22.672 "name": "Existed_Raid", 00:15:22.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.672 "strip_size_kb": 0, 00:15:22.672 "state": "configuring", 00:15:22.672 "raid_level": "raid1", 00:15:22.672 "superblock": false, 00:15:22.672 "num_base_bdevs": 3, 00:15:22.672 "num_base_bdevs_discovered": 1, 00:15:22.672 "num_base_bdevs_operational": 3, 00:15:22.672 "base_bdevs_list": [ 00:15:22.672 { 00:15:22.672 "name": "BaseBdev1", 00:15:22.672 "uuid": "2702c394-9543-4a09-8a11-6238b416f530", 00:15:22.672 "is_configured": true, 00:15:22.672 "data_offset": 0, 00:15:22.672 "data_size": 65536 00:15:22.672 }, 00:15:22.672 { 00:15:22.672 "name": "BaseBdev2", 00:15:22.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.672 "is_configured": false, 00:15:22.672 "data_offset": 0, 00:15:22.672 "data_size": 0 00:15:22.672 }, 00:15:22.672 { 00:15:22.672 "name": "BaseBdev3", 00:15:22.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.672 "is_configured": false, 00:15:22.672 "data_offset": 0, 00:15:22.672 "data_size": 0 00:15:22.672 } 00:15:22.672 ] 00:15:22.672 }' 00:15:22.672 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.672 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.241 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:23.241 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.241 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.241 [2024-12-06 13:10:09.977195] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:23.241 [2024-12-06 13:10:09.977273] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:23.241 13:10:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.241 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:23.241 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.241 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.241 [2024-12-06 13:10:09.985204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:23.241 [2024-12-06 13:10:09.987594] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:23.241 [2024-12-06 13:10:09.987640] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:23.241 [2024-12-06 13:10:09.987655] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:23.241 [2024-12-06 13:10:09.987669] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:23.241 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.241 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:23.241 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:23.241 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:23.241 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.241 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:23.241 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:23.241 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:15:23.241 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:23.241 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.241 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.241 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.241 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.241 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.241 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.241 13:10:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.241 13:10:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.241 13:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.241 13:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.241 "name": "Existed_Raid", 00:15:23.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.241 "strip_size_kb": 0, 00:15:23.241 "state": "configuring", 00:15:23.241 "raid_level": "raid1", 00:15:23.241 "superblock": false, 00:15:23.241 "num_base_bdevs": 3, 00:15:23.241 "num_base_bdevs_discovered": 1, 00:15:23.241 "num_base_bdevs_operational": 3, 00:15:23.241 "base_bdevs_list": [ 00:15:23.241 { 00:15:23.241 "name": "BaseBdev1", 00:15:23.241 "uuid": "2702c394-9543-4a09-8a11-6238b416f530", 00:15:23.241 "is_configured": true, 00:15:23.241 "data_offset": 0, 00:15:23.241 "data_size": 65536 00:15:23.241 }, 00:15:23.241 { 00:15:23.241 "name": "BaseBdev2", 00:15:23.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.241 
"is_configured": false, 00:15:23.241 "data_offset": 0, 00:15:23.241 "data_size": 0 00:15:23.241 }, 00:15:23.241 { 00:15:23.241 "name": "BaseBdev3", 00:15:23.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.241 "is_configured": false, 00:15:23.241 "data_offset": 0, 00:15:23.241 "data_size": 0 00:15:23.241 } 00:15:23.241 ] 00:15:23.241 }' 00:15:23.241 13:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.241 13:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.809 13:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:23.809 13:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.809 13:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.809 [2024-12-06 13:10:10.555207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:23.809 BaseBdev2 00:15:23.809 13:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.809 13:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:23.809 13:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:23.809 13:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:23.809 13:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:23.809 13:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:23.809 13:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:23.809 13:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:23.809 13:10:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.809 13:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.809 13:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.809 13:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:23.809 13:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.809 13:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.809 [ 00:15:23.809 { 00:15:23.809 "name": "BaseBdev2", 00:15:23.809 "aliases": [ 00:15:23.809 "2f1f0d04-6834-4025-ae5b-87aec58be67d" 00:15:23.809 ], 00:15:23.809 "product_name": "Malloc disk", 00:15:23.809 "block_size": 512, 00:15:23.809 "num_blocks": 65536, 00:15:23.809 "uuid": "2f1f0d04-6834-4025-ae5b-87aec58be67d", 00:15:23.809 "assigned_rate_limits": { 00:15:23.809 "rw_ios_per_sec": 0, 00:15:23.809 "rw_mbytes_per_sec": 0, 00:15:23.809 "r_mbytes_per_sec": 0, 00:15:23.809 "w_mbytes_per_sec": 0 00:15:23.809 }, 00:15:23.809 "claimed": true, 00:15:23.809 "claim_type": "exclusive_write", 00:15:23.809 "zoned": false, 00:15:23.809 "supported_io_types": { 00:15:23.809 "read": true, 00:15:23.809 "write": true, 00:15:23.809 "unmap": true, 00:15:23.809 "flush": true, 00:15:23.809 "reset": true, 00:15:23.809 "nvme_admin": false, 00:15:23.809 "nvme_io": false, 00:15:23.809 "nvme_io_md": false, 00:15:23.809 "write_zeroes": true, 00:15:23.809 "zcopy": true, 00:15:23.809 "get_zone_info": false, 00:15:23.809 "zone_management": false, 00:15:23.809 "zone_append": false, 00:15:23.809 "compare": false, 00:15:23.809 "compare_and_write": false, 00:15:23.809 "abort": true, 00:15:23.809 "seek_hole": false, 00:15:23.809 "seek_data": false, 00:15:23.809 "copy": true, 00:15:23.809 "nvme_iov_md": false 00:15:23.809 }, 00:15:23.809 
"memory_domains": [ 00:15:23.809 { 00:15:23.809 "dma_device_id": "system", 00:15:23.809 "dma_device_type": 1 00:15:23.809 }, 00:15:23.809 { 00:15:23.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.809 "dma_device_type": 2 00:15:23.809 } 00:15:23.809 ], 00:15:23.809 "driver_specific": {} 00:15:23.809 } 00:15:23.809 ] 00:15:23.809 13:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.809 13:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:23.809 13:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:23.809 13:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:23.809 13:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:23.809 13:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.809 13:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:23.809 13:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:23.809 13:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:23.809 13:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:23.809 13:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.809 13:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.809 13:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.809 13:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.809 13:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:23.809 13:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.809 13:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.809 13:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.809 13:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.809 13:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.809 "name": "Existed_Raid", 00:15:23.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.809 "strip_size_kb": 0, 00:15:23.809 "state": "configuring", 00:15:23.809 "raid_level": "raid1", 00:15:23.809 "superblock": false, 00:15:23.809 "num_base_bdevs": 3, 00:15:23.809 "num_base_bdevs_discovered": 2, 00:15:23.809 "num_base_bdevs_operational": 3, 00:15:23.809 "base_bdevs_list": [ 00:15:23.809 { 00:15:23.809 "name": "BaseBdev1", 00:15:23.809 "uuid": "2702c394-9543-4a09-8a11-6238b416f530", 00:15:23.809 "is_configured": true, 00:15:23.809 "data_offset": 0, 00:15:23.809 "data_size": 65536 00:15:23.809 }, 00:15:23.809 { 00:15:23.809 "name": "BaseBdev2", 00:15:23.809 "uuid": "2f1f0d04-6834-4025-ae5b-87aec58be67d", 00:15:23.809 "is_configured": true, 00:15:23.809 "data_offset": 0, 00:15:23.809 "data_size": 65536 00:15:23.809 }, 00:15:23.809 { 00:15:23.809 "name": "BaseBdev3", 00:15:23.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.810 "is_configured": false, 00:15:23.810 "data_offset": 0, 00:15:23.810 "data_size": 0 00:15:23.810 } 00:15:23.810 ] 00:15:23.810 }' 00:15:23.810 13:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.810 13:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.376 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:15:24.376 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.376 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.376 [2024-12-06 13:10:11.139977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:24.376 [2024-12-06 13:10:11.140042] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:24.376 [2024-12-06 13:10:11.140063] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:24.376 [2024-12-06 13:10:11.140400] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:24.376 [2024-12-06 13:10:11.140652] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:24.376 [2024-12-06 13:10:11.140678] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:24.376 [2024-12-06 13:10:11.140995] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.376 BaseBdev3 00:15:24.376 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.376 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:24.376 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:24.376 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:24.376 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:24.376 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:24.376 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:24.376 13:10:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:24.376 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.376 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.376 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.376 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:24.376 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.376 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.376 [ 00:15:24.376 { 00:15:24.376 "name": "BaseBdev3", 00:15:24.376 "aliases": [ 00:15:24.376 "42459cf3-8c51-4211-8362-f395f4284004" 00:15:24.376 ], 00:15:24.376 "product_name": "Malloc disk", 00:15:24.376 "block_size": 512, 00:15:24.376 "num_blocks": 65536, 00:15:24.376 "uuid": "42459cf3-8c51-4211-8362-f395f4284004", 00:15:24.376 "assigned_rate_limits": { 00:15:24.376 "rw_ios_per_sec": 0, 00:15:24.376 "rw_mbytes_per_sec": 0, 00:15:24.376 "r_mbytes_per_sec": 0, 00:15:24.376 "w_mbytes_per_sec": 0 00:15:24.376 }, 00:15:24.376 "claimed": true, 00:15:24.376 "claim_type": "exclusive_write", 00:15:24.376 "zoned": false, 00:15:24.376 "supported_io_types": { 00:15:24.376 "read": true, 00:15:24.376 "write": true, 00:15:24.376 "unmap": true, 00:15:24.376 "flush": true, 00:15:24.376 "reset": true, 00:15:24.376 "nvme_admin": false, 00:15:24.376 "nvme_io": false, 00:15:24.376 "nvme_io_md": false, 00:15:24.376 "write_zeroes": true, 00:15:24.376 "zcopy": true, 00:15:24.376 "get_zone_info": false, 00:15:24.376 "zone_management": false, 00:15:24.376 "zone_append": false, 00:15:24.376 "compare": false, 00:15:24.376 "compare_and_write": false, 00:15:24.376 "abort": true, 00:15:24.376 "seek_hole": false, 00:15:24.376 "seek_data": false, 00:15:24.376 
"copy": true, 00:15:24.376 "nvme_iov_md": false 00:15:24.376 }, 00:15:24.376 "memory_domains": [ 00:15:24.376 { 00:15:24.376 "dma_device_id": "system", 00:15:24.376 "dma_device_type": 1 00:15:24.376 }, 00:15:24.376 { 00:15:24.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.376 "dma_device_type": 2 00:15:24.376 } 00:15:24.376 ], 00:15:24.376 "driver_specific": {} 00:15:24.376 } 00:15:24.376 ] 00:15:24.376 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.376 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:24.376 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:24.376 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:24.376 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:24.376 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.376 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.376 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:24.376 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:24.376 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.376 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.376 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.376 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.376 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.376 13:10:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.376 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.376 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.376 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.376 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.376 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.376 "name": "Existed_Raid", 00:15:24.376 "uuid": "3bfc05df-48cb-4445-9a63-2c90c27c5d11", 00:15:24.376 "strip_size_kb": 0, 00:15:24.377 "state": "online", 00:15:24.377 "raid_level": "raid1", 00:15:24.377 "superblock": false, 00:15:24.377 "num_base_bdevs": 3, 00:15:24.377 "num_base_bdevs_discovered": 3, 00:15:24.377 "num_base_bdevs_operational": 3, 00:15:24.377 "base_bdevs_list": [ 00:15:24.377 { 00:15:24.377 "name": "BaseBdev1", 00:15:24.377 "uuid": "2702c394-9543-4a09-8a11-6238b416f530", 00:15:24.377 "is_configured": true, 00:15:24.377 "data_offset": 0, 00:15:24.377 "data_size": 65536 00:15:24.377 }, 00:15:24.377 { 00:15:24.377 "name": "BaseBdev2", 00:15:24.377 "uuid": "2f1f0d04-6834-4025-ae5b-87aec58be67d", 00:15:24.377 "is_configured": true, 00:15:24.377 "data_offset": 0, 00:15:24.377 "data_size": 65536 00:15:24.377 }, 00:15:24.377 { 00:15:24.377 "name": "BaseBdev3", 00:15:24.377 "uuid": "42459cf3-8c51-4211-8362-f395f4284004", 00:15:24.377 "is_configured": true, 00:15:24.377 "data_offset": 0, 00:15:24.377 "data_size": 65536 00:15:24.377 } 00:15:24.377 ] 00:15:24.377 }' 00:15:24.377 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.377 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.943 13:10:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:24.943 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:24.943 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:24.943 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:24.943 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:24.943 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:24.943 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:24.943 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.943 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.943 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:24.943 [2024-12-06 13:10:11.728602] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:24.943 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.943 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:24.943 "name": "Existed_Raid", 00:15:24.943 "aliases": [ 00:15:24.943 "3bfc05df-48cb-4445-9a63-2c90c27c5d11" 00:15:24.943 ], 00:15:24.943 "product_name": "Raid Volume", 00:15:24.943 "block_size": 512, 00:15:24.943 "num_blocks": 65536, 00:15:24.943 "uuid": "3bfc05df-48cb-4445-9a63-2c90c27c5d11", 00:15:24.943 "assigned_rate_limits": { 00:15:24.943 "rw_ios_per_sec": 0, 00:15:24.943 "rw_mbytes_per_sec": 0, 00:15:24.943 "r_mbytes_per_sec": 0, 00:15:24.943 "w_mbytes_per_sec": 0 00:15:24.943 }, 00:15:24.943 "claimed": false, 00:15:24.943 "zoned": false, 
00:15:24.943 "supported_io_types": { 00:15:24.943 "read": true, 00:15:24.943 "write": true, 00:15:24.943 "unmap": false, 00:15:24.943 "flush": false, 00:15:24.943 "reset": true, 00:15:24.943 "nvme_admin": false, 00:15:24.943 "nvme_io": false, 00:15:24.943 "nvme_io_md": false, 00:15:24.943 "write_zeroes": true, 00:15:24.943 "zcopy": false, 00:15:24.943 "get_zone_info": false, 00:15:24.943 "zone_management": false, 00:15:24.943 "zone_append": false, 00:15:24.943 "compare": false, 00:15:24.943 "compare_and_write": false, 00:15:24.943 "abort": false, 00:15:24.943 "seek_hole": false, 00:15:24.943 "seek_data": false, 00:15:24.943 "copy": false, 00:15:24.943 "nvme_iov_md": false 00:15:24.943 }, 00:15:24.943 "memory_domains": [ 00:15:24.943 { 00:15:24.943 "dma_device_id": "system", 00:15:24.943 "dma_device_type": 1 00:15:24.943 }, 00:15:24.943 { 00:15:24.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.943 "dma_device_type": 2 00:15:24.943 }, 00:15:24.943 { 00:15:24.943 "dma_device_id": "system", 00:15:24.943 "dma_device_type": 1 00:15:24.943 }, 00:15:24.943 { 00:15:24.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.943 "dma_device_type": 2 00:15:24.943 }, 00:15:24.943 { 00:15:24.943 "dma_device_id": "system", 00:15:24.943 "dma_device_type": 1 00:15:24.943 }, 00:15:24.943 { 00:15:24.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.943 "dma_device_type": 2 00:15:24.943 } 00:15:24.943 ], 00:15:24.943 "driver_specific": { 00:15:24.943 "raid": { 00:15:24.943 "uuid": "3bfc05df-48cb-4445-9a63-2c90c27c5d11", 00:15:24.943 "strip_size_kb": 0, 00:15:24.943 "state": "online", 00:15:24.943 "raid_level": "raid1", 00:15:24.943 "superblock": false, 00:15:24.943 "num_base_bdevs": 3, 00:15:24.943 "num_base_bdevs_discovered": 3, 00:15:24.943 "num_base_bdevs_operational": 3, 00:15:24.943 "base_bdevs_list": [ 00:15:24.943 { 00:15:24.943 "name": "BaseBdev1", 00:15:24.943 "uuid": "2702c394-9543-4a09-8a11-6238b416f530", 00:15:24.943 "is_configured": true, 00:15:24.943 
"data_offset": 0, 00:15:24.943 "data_size": 65536 00:15:24.943 }, 00:15:24.943 { 00:15:24.943 "name": "BaseBdev2", 00:15:24.943 "uuid": "2f1f0d04-6834-4025-ae5b-87aec58be67d", 00:15:24.943 "is_configured": true, 00:15:24.943 "data_offset": 0, 00:15:24.943 "data_size": 65536 00:15:24.943 }, 00:15:24.943 { 00:15:24.943 "name": "BaseBdev3", 00:15:24.943 "uuid": "42459cf3-8c51-4211-8362-f395f4284004", 00:15:24.943 "is_configured": true, 00:15:24.943 "data_offset": 0, 00:15:24.943 "data_size": 65536 00:15:24.943 } 00:15:24.943 ] 00:15:24.943 } 00:15:24.943 } 00:15:24.943 }' 00:15:24.943 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:24.943 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:24.943 BaseBdev2 00:15:24.943 BaseBdev3' 00:15:24.943 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:24.943 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:24.943 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:24.943 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:24.943 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:24.943 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.943 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.943 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.943 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:15:24.943 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:24.943 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:24.943 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:24.943 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:24.943 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.943 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.202 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.202 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:25.202 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:25.202 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:25.202 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:25.202 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.202 13:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.202 13:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:25.202 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.202 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:25.202 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:15:25.202 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:25.202 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.202 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.202 [2024-12-06 13:10:12.048327] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:25.202 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.202 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:25.202 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:25.202 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:25.202 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:25.202 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:25.202 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:25.202 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.202 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.202 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:25.202 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:25.202 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:25.202 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.202 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:15:25.202 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.202 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.202 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.202 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.202 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.202 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.202 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.202 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.202 "name": "Existed_Raid", 00:15:25.202 "uuid": "3bfc05df-48cb-4445-9a63-2c90c27c5d11", 00:15:25.202 "strip_size_kb": 0, 00:15:25.202 "state": "online", 00:15:25.202 "raid_level": "raid1", 00:15:25.202 "superblock": false, 00:15:25.202 "num_base_bdevs": 3, 00:15:25.202 "num_base_bdevs_discovered": 2, 00:15:25.202 "num_base_bdevs_operational": 2, 00:15:25.202 "base_bdevs_list": [ 00:15:25.202 { 00:15:25.202 "name": null, 00:15:25.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.202 "is_configured": false, 00:15:25.202 "data_offset": 0, 00:15:25.202 "data_size": 65536 00:15:25.202 }, 00:15:25.202 { 00:15:25.202 "name": "BaseBdev2", 00:15:25.202 "uuid": "2f1f0d04-6834-4025-ae5b-87aec58be67d", 00:15:25.202 "is_configured": true, 00:15:25.202 "data_offset": 0, 00:15:25.202 "data_size": 65536 00:15:25.202 }, 00:15:25.202 { 00:15:25.202 "name": "BaseBdev3", 00:15:25.202 "uuid": "42459cf3-8c51-4211-8362-f395f4284004", 00:15:25.202 "is_configured": true, 00:15:25.202 "data_offset": 0, 00:15:25.202 "data_size": 65536 00:15:25.202 } 00:15:25.202 ] 
00:15:25.202 }' 00:15:25.202 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.202 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.769 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:25.769 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:25.769 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.769 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:25.769 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.769 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.769 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.769 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:25.769 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:25.769 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:25.769 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.769 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.769 [2024-12-06 13:10:12.678869] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:25.769 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.769 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:25.769 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:25.769 13:10:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.769 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.769 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.769 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:25.769 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.028 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:26.028 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:26.028 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:26.028 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.028 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.028 [2024-12-06 13:10:12.822271] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:26.028 [2024-12-06 13:10:12.822392] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:26.028 [2024-12-06 13:10:12.905846] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:26.028 [2024-12-06 13:10:12.905915] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:26.028 [2024-12-06 13:10:12.905937] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:26.028 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.028 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:26.028 13:10:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:26.028 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.028 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.028 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:26.028 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.028 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.028 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:26.028 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:26.028 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:26.028 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:26.028 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:26.028 13:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:26.028 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.028 13:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.028 BaseBdev2 00:15:26.028 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.028 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:26.028 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:26.028 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:26.028 
13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:26.028 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:26.028 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:26.028 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:26.028 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.028 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.028 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.028 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:26.028 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.028 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.028 [ 00:15:26.028 { 00:15:26.028 "name": "BaseBdev2", 00:15:26.028 "aliases": [ 00:15:26.028 "5c43abb2-0807-48c4-aac4-e238e31ee0df" 00:15:26.028 ], 00:15:26.028 "product_name": "Malloc disk", 00:15:26.028 "block_size": 512, 00:15:26.028 "num_blocks": 65536, 00:15:26.028 "uuid": "5c43abb2-0807-48c4-aac4-e238e31ee0df", 00:15:26.028 "assigned_rate_limits": { 00:15:26.028 "rw_ios_per_sec": 0, 00:15:26.028 "rw_mbytes_per_sec": 0, 00:15:26.028 "r_mbytes_per_sec": 0, 00:15:26.028 "w_mbytes_per_sec": 0 00:15:26.028 }, 00:15:26.028 "claimed": false, 00:15:26.028 "zoned": false, 00:15:26.028 "supported_io_types": { 00:15:26.028 "read": true, 00:15:26.028 "write": true, 00:15:26.028 "unmap": true, 00:15:26.028 "flush": true, 00:15:26.028 "reset": true, 00:15:26.028 "nvme_admin": false, 00:15:26.028 "nvme_io": false, 00:15:26.028 "nvme_io_md": false, 00:15:26.028 "write_zeroes": true, 
00:15:26.028 "zcopy": true, 00:15:26.028 "get_zone_info": false, 00:15:26.028 "zone_management": false, 00:15:26.028 "zone_append": false, 00:15:26.028 "compare": false, 00:15:26.028 "compare_and_write": false, 00:15:26.028 "abort": true, 00:15:26.028 "seek_hole": false, 00:15:26.028 "seek_data": false, 00:15:26.028 "copy": true, 00:15:26.028 "nvme_iov_md": false 00:15:26.028 }, 00:15:26.028 "memory_domains": [ 00:15:26.028 { 00:15:26.028 "dma_device_id": "system", 00:15:26.028 "dma_device_type": 1 00:15:26.028 }, 00:15:26.028 { 00:15:26.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.028 "dma_device_type": 2 00:15:26.028 } 00:15:26.028 ], 00:15:26.028 "driver_specific": {} 00:15:26.028 } 00:15:26.028 ] 00:15:26.028 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.028 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:26.028 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:26.028 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:26.028 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:26.028 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.028 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.287 BaseBdev3 00:15:26.287 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.287 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:26.287 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:26.287 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:26.288 13:10:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:26.288 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:26.288 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:26.288 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:26.288 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.288 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.288 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.288 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:26.288 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.288 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.288 [ 00:15:26.288 { 00:15:26.288 "name": "BaseBdev3", 00:15:26.288 "aliases": [ 00:15:26.288 "0f819164-7cf3-4e51-80ce-26f5b221dfb7" 00:15:26.288 ], 00:15:26.288 "product_name": "Malloc disk", 00:15:26.288 "block_size": 512, 00:15:26.288 "num_blocks": 65536, 00:15:26.288 "uuid": "0f819164-7cf3-4e51-80ce-26f5b221dfb7", 00:15:26.288 "assigned_rate_limits": { 00:15:26.288 "rw_ios_per_sec": 0, 00:15:26.288 "rw_mbytes_per_sec": 0, 00:15:26.288 "r_mbytes_per_sec": 0, 00:15:26.288 "w_mbytes_per_sec": 0 00:15:26.288 }, 00:15:26.288 "claimed": false, 00:15:26.288 "zoned": false, 00:15:26.288 "supported_io_types": { 00:15:26.288 "read": true, 00:15:26.288 "write": true, 00:15:26.288 "unmap": true, 00:15:26.288 "flush": true, 00:15:26.288 "reset": true, 00:15:26.288 "nvme_admin": false, 00:15:26.288 "nvme_io": false, 00:15:26.288 "nvme_io_md": false, 00:15:26.288 "write_zeroes": true, 
00:15:26.288 "zcopy": true, 00:15:26.288 "get_zone_info": false, 00:15:26.288 "zone_management": false, 00:15:26.288 "zone_append": false, 00:15:26.288 "compare": false, 00:15:26.288 "compare_and_write": false, 00:15:26.288 "abort": true, 00:15:26.288 "seek_hole": false, 00:15:26.288 "seek_data": false, 00:15:26.288 "copy": true, 00:15:26.288 "nvme_iov_md": false 00:15:26.288 }, 00:15:26.288 "memory_domains": [ 00:15:26.288 { 00:15:26.288 "dma_device_id": "system", 00:15:26.288 "dma_device_type": 1 00:15:26.288 }, 00:15:26.288 { 00:15:26.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.288 "dma_device_type": 2 00:15:26.288 } 00:15:26.288 ], 00:15:26.288 "driver_specific": {} 00:15:26.288 } 00:15:26.288 ] 00:15:26.288 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.288 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:26.288 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:26.288 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:26.288 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:26.288 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.288 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.288 [2024-12-06 13:10:13.098504] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:26.288 [2024-12-06 13:10:13.098564] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:26.288 [2024-12-06 13:10:13.098591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:26.288 [2024-12-06 13:10:13.101013] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:26.288 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.288 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:26.288 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.288 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:26.288 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:26.288 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:26.288 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:26.288 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.288 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.288 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.288 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.288 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.288 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.288 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.288 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.288 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.288 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:15:26.288 "name": "Existed_Raid", 00:15:26.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.288 "strip_size_kb": 0, 00:15:26.288 "state": "configuring", 00:15:26.288 "raid_level": "raid1", 00:15:26.288 "superblock": false, 00:15:26.288 "num_base_bdevs": 3, 00:15:26.288 "num_base_bdevs_discovered": 2, 00:15:26.288 "num_base_bdevs_operational": 3, 00:15:26.288 "base_bdevs_list": [ 00:15:26.288 { 00:15:26.288 "name": "BaseBdev1", 00:15:26.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.288 "is_configured": false, 00:15:26.288 "data_offset": 0, 00:15:26.288 "data_size": 0 00:15:26.288 }, 00:15:26.288 { 00:15:26.288 "name": "BaseBdev2", 00:15:26.288 "uuid": "5c43abb2-0807-48c4-aac4-e238e31ee0df", 00:15:26.288 "is_configured": true, 00:15:26.288 "data_offset": 0, 00:15:26.288 "data_size": 65536 00:15:26.288 }, 00:15:26.288 { 00:15:26.288 "name": "BaseBdev3", 00:15:26.288 "uuid": "0f819164-7cf3-4e51-80ce-26f5b221dfb7", 00:15:26.288 "is_configured": true, 00:15:26.288 "data_offset": 0, 00:15:26.288 "data_size": 65536 00:15:26.288 } 00:15:26.288 ] 00:15:26.288 }' 00:15:26.288 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.288 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.855 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:26.855 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.855 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.855 [2024-12-06 13:10:13.638698] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:26.855 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.855 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:15:26.856 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.856 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:26.856 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:26.856 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:26.856 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:26.856 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.856 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.856 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.856 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.856 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.856 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.856 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.856 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.856 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.856 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.856 "name": "Existed_Raid", 00:15:26.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.856 "strip_size_kb": 0, 00:15:26.856 "state": "configuring", 00:15:26.856 "raid_level": "raid1", 00:15:26.856 "superblock": false, 00:15:26.856 "num_base_bdevs": 3, 
00:15:26.856 "num_base_bdevs_discovered": 1, 00:15:26.856 "num_base_bdevs_operational": 3, 00:15:26.856 "base_bdevs_list": [ 00:15:26.856 { 00:15:26.856 "name": "BaseBdev1", 00:15:26.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.856 "is_configured": false, 00:15:26.856 "data_offset": 0, 00:15:26.856 "data_size": 0 00:15:26.856 }, 00:15:26.856 { 00:15:26.856 "name": null, 00:15:26.856 "uuid": "5c43abb2-0807-48c4-aac4-e238e31ee0df", 00:15:26.856 "is_configured": false, 00:15:26.856 "data_offset": 0, 00:15:26.856 "data_size": 65536 00:15:26.856 }, 00:15:26.856 { 00:15:26.856 "name": "BaseBdev3", 00:15:26.856 "uuid": "0f819164-7cf3-4e51-80ce-26f5b221dfb7", 00:15:26.856 "is_configured": true, 00:15:26.856 "data_offset": 0, 00:15:26.856 "data_size": 65536 00:15:26.856 } 00:15:26.856 ] 00:15:26.856 }' 00:15:26.856 13:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.856 13:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.433 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.433 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.433 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.433 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:27.433 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.433 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:27.433 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:27.433 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.433 13:10:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.433 [2024-12-06 13:10:14.248580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:27.433 BaseBdev1 00:15:27.433 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.433 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:27.433 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:27.433 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:27.433 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:27.433 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:27.433 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:27.433 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:27.433 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.433 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.433 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.433 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:27.433 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.433 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.433 [ 00:15:27.433 { 00:15:27.433 "name": "BaseBdev1", 00:15:27.433 "aliases": [ 00:15:27.433 "ee515b5b-58bf-47e2-b761-e0aeeb203fe5" 00:15:27.433 ], 00:15:27.433 "product_name": "Malloc disk", 
00:15:27.433 "block_size": 512, 00:15:27.433 "num_blocks": 65536, 00:15:27.433 "uuid": "ee515b5b-58bf-47e2-b761-e0aeeb203fe5", 00:15:27.433 "assigned_rate_limits": { 00:15:27.433 "rw_ios_per_sec": 0, 00:15:27.433 "rw_mbytes_per_sec": 0, 00:15:27.433 "r_mbytes_per_sec": 0, 00:15:27.433 "w_mbytes_per_sec": 0 00:15:27.433 }, 00:15:27.433 "claimed": true, 00:15:27.433 "claim_type": "exclusive_write", 00:15:27.433 "zoned": false, 00:15:27.433 "supported_io_types": { 00:15:27.433 "read": true, 00:15:27.433 "write": true, 00:15:27.433 "unmap": true, 00:15:27.433 "flush": true, 00:15:27.433 "reset": true, 00:15:27.433 "nvme_admin": false, 00:15:27.433 "nvme_io": false, 00:15:27.433 "nvme_io_md": false, 00:15:27.433 "write_zeroes": true, 00:15:27.433 "zcopy": true, 00:15:27.433 "get_zone_info": false, 00:15:27.433 "zone_management": false, 00:15:27.433 "zone_append": false, 00:15:27.433 "compare": false, 00:15:27.433 "compare_and_write": false, 00:15:27.433 "abort": true, 00:15:27.433 "seek_hole": false, 00:15:27.433 "seek_data": false, 00:15:27.433 "copy": true, 00:15:27.433 "nvme_iov_md": false 00:15:27.433 }, 00:15:27.433 "memory_domains": [ 00:15:27.433 { 00:15:27.433 "dma_device_id": "system", 00:15:27.433 "dma_device_type": 1 00:15:27.433 }, 00:15:27.433 { 00:15:27.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.433 "dma_device_type": 2 00:15:27.433 } 00:15:27.433 ], 00:15:27.433 "driver_specific": {} 00:15:27.433 } 00:15:27.433 ] 00:15:27.433 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.433 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:27.434 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:27.434 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.434 13:10:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.434 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:27.434 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:27.434 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:27.434 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.434 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.434 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.434 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.434 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.434 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.434 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.434 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.434 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.434 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.434 "name": "Existed_Raid", 00:15:27.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.434 "strip_size_kb": 0, 00:15:27.434 "state": "configuring", 00:15:27.434 "raid_level": "raid1", 00:15:27.434 "superblock": false, 00:15:27.434 "num_base_bdevs": 3, 00:15:27.434 "num_base_bdevs_discovered": 2, 00:15:27.434 "num_base_bdevs_operational": 3, 00:15:27.434 "base_bdevs_list": [ 00:15:27.434 { 00:15:27.434 "name": "BaseBdev1", 00:15:27.434 "uuid": 
"ee515b5b-58bf-47e2-b761-e0aeeb203fe5", 00:15:27.434 "is_configured": true, 00:15:27.434 "data_offset": 0, 00:15:27.434 "data_size": 65536 00:15:27.434 }, 00:15:27.434 { 00:15:27.434 "name": null, 00:15:27.434 "uuid": "5c43abb2-0807-48c4-aac4-e238e31ee0df", 00:15:27.434 "is_configured": false, 00:15:27.434 "data_offset": 0, 00:15:27.434 "data_size": 65536 00:15:27.434 }, 00:15:27.434 { 00:15:27.434 "name": "BaseBdev3", 00:15:27.434 "uuid": "0f819164-7cf3-4e51-80ce-26f5b221dfb7", 00:15:27.434 "is_configured": true, 00:15:27.434 "data_offset": 0, 00:15:27.434 "data_size": 65536 00:15:27.434 } 00:15:27.434 ] 00:15:27.434 }' 00:15:27.434 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.434 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.048 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.048 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.048 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.048 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:28.048 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.048 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:28.048 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:28.048 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.048 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.048 [2024-12-06 13:10:14.832740] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:28.048 13:10:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.048 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:28.048 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.048 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:28.048 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:28.048 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:28.048 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:28.048 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.048 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.048 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.048 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.048 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.048 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.048 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.048 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.048 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.048 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.048 "name": "Existed_Raid", 00:15:28.048 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:28.048 "strip_size_kb": 0, 00:15:28.048 "state": "configuring", 00:15:28.048 "raid_level": "raid1", 00:15:28.048 "superblock": false, 00:15:28.048 "num_base_bdevs": 3, 00:15:28.048 "num_base_bdevs_discovered": 1, 00:15:28.048 "num_base_bdevs_operational": 3, 00:15:28.048 "base_bdevs_list": [ 00:15:28.048 { 00:15:28.048 "name": "BaseBdev1", 00:15:28.048 "uuid": "ee515b5b-58bf-47e2-b761-e0aeeb203fe5", 00:15:28.048 "is_configured": true, 00:15:28.048 "data_offset": 0, 00:15:28.048 "data_size": 65536 00:15:28.048 }, 00:15:28.048 { 00:15:28.048 "name": null, 00:15:28.048 "uuid": "5c43abb2-0807-48c4-aac4-e238e31ee0df", 00:15:28.048 "is_configured": false, 00:15:28.048 "data_offset": 0, 00:15:28.048 "data_size": 65536 00:15:28.048 }, 00:15:28.048 { 00:15:28.048 "name": null, 00:15:28.048 "uuid": "0f819164-7cf3-4e51-80ce-26f5b221dfb7", 00:15:28.048 "is_configured": false, 00:15:28.048 "data_offset": 0, 00:15:28.048 "data_size": 65536 00:15:28.048 } 00:15:28.048 ] 00:15:28.048 }' 00:15:28.049 13:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.049 13:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.330 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.330 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:28.330 13:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.330 13:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.588 13:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.588 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:28.588 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:28.588 13:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.588 13:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.589 [2024-12-06 13:10:15.392950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:28.589 13:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.589 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:28.589 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.589 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:28.589 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:28.589 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:28.589 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:28.589 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.589 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.589 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.589 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.589 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.589 13:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.589 13:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:28.589 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.589 13:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.589 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.589 "name": "Existed_Raid", 00:15:28.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.589 "strip_size_kb": 0, 00:15:28.589 "state": "configuring", 00:15:28.589 "raid_level": "raid1", 00:15:28.589 "superblock": false, 00:15:28.589 "num_base_bdevs": 3, 00:15:28.589 "num_base_bdevs_discovered": 2, 00:15:28.589 "num_base_bdevs_operational": 3, 00:15:28.589 "base_bdevs_list": [ 00:15:28.589 { 00:15:28.589 "name": "BaseBdev1", 00:15:28.589 "uuid": "ee515b5b-58bf-47e2-b761-e0aeeb203fe5", 00:15:28.589 "is_configured": true, 00:15:28.589 "data_offset": 0, 00:15:28.589 "data_size": 65536 00:15:28.589 }, 00:15:28.589 { 00:15:28.589 "name": null, 00:15:28.589 "uuid": "5c43abb2-0807-48c4-aac4-e238e31ee0df", 00:15:28.589 "is_configured": false, 00:15:28.589 "data_offset": 0, 00:15:28.589 "data_size": 65536 00:15:28.589 }, 00:15:28.589 { 00:15:28.589 "name": "BaseBdev3", 00:15:28.589 "uuid": "0f819164-7cf3-4e51-80ce-26f5b221dfb7", 00:15:28.589 "is_configured": true, 00:15:28.589 "data_offset": 0, 00:15:28.589 "data_size": 65536 00:15:28.589 } 00:15:28.589 ] 00:15:28.589 }' 00:15:28.589 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.589 13:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.156 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.156 13:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.156 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:15:29.156 13:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.156 13:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.156 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:29.156 13:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:29.156 13:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.156 13:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.156 [2024-12-06 13:10:15.969125] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:29.156 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.156 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:29.156 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.156 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.156 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:29.156 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:29.156 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:29.156 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.156 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.156 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.156 13:10:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.156 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.156 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.156 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.156 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.156 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.156 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.156 "name": "Existed_Raid", 00:15:29.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.156 "strip_size_kb": 0, 00:15:29.156 "state": "configuring", 00:15:29.156 "raid_level": "raid1", 00:15:29.156 "superblock": false, 00:15:29.156 "num_base_bdevs": 3, 00:15:29.156 "num_base_bdevs_discovered": 1, 00:15:29.156 "num_base_bdevs_operational": 3, 00:15:29.156 "base_bdevs_list": [ 00:15:29.156 { 00:15:29.156 "name": null, 00:15:29.156 "uuid": "ee515b5b-58bf-47e2-b761-e0aeeb203fe5", 00:15:29.156 "is_configured": false, 00:15:29.156 "data_offset": 0, 00:15:29.156 "data_size": 65536 00:15:29.156 }, 00:15:29.156 { 00:15:29.156 "name": null, 00:15:29.156 "uuid": "5c43abb2-0807-48c4-aac4-e238e31ee0df", 00:15:29.156 "is_configured": false, 00:15:29.156 "data_offset": 0, 00:15:29.156 "data_size": 65536 00:15:29.156 }, 00:15:29.156 { 00:15:29.156 "name": "BaseBdev3", 00:15:29.156 "uuid": "0f819164-7cf3-4e51-80ce-26f5b221dfb7", 00:15:29.156 "is_configured": true, 00:15:29.156 "data_offset": 0, 00:15:29.156 "data_size": 65536 00:15:29.156 } 00:15:29.156 ] 00:15:29.156 }' 00:15:29.156 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.156 13:10:16 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:15:29.723 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:29.723 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.723 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.723 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.723 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.723 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:29.723 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:29.723 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.723 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.723 [2024-12-06 13:10:16.612857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:29.723 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.723 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:29.723 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.723 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.723 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:29.723 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:29.723 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:15:29.723 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.723 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.723 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.723 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.723 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.723 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.723 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.723 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.723 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.723 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.723 "name": "Existed_Raid", 00:15:29.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.723 "strip_size_kb": 0, 00:15:29.723 "state": "configuring", 00:15:29.723 "raid_level": "raid1", 00:15:29.723 "superblock": false, 00:15:29.723 "num_base_bdevs": 3, 00:15:29.723 "num_base_bdevs_discovered": 2, 00:15:29.723 "num_base_bdevs_operational": 3, 00:15:29.723 "base_bdevs_list": [ 00:15:29.723 { 00:15:29.723 "name": null, 00:15:29.723 "uuid": "ee515b5b-58bf-47e2-b761-e0aeeb203fe5", 00:15:29.723 "is_configured": false, 00:15:29.723 "data_offset": 0, 00:15:29.723 "data_size": 65536 00:15:29.723 }, 00:15:29.723 { 00:15:29.723 "name": "BaseBdev2", 00:15:29.723 "uuid": "5c43abb2-0807-48c4-aac4-e238e31ee0df", 00:15:29.723 "is_configured": true, 00:15:29.723 "data_offset": 0, 00:15:29.723 "data_size": 65536 00:15:29.723 }, 00:15:29.723 { 
00:15:29.723 "name": "BaseBdev3", 00:15:29.723 "uuid": "0f819164-7cf3-4e51-80ce-26f5b221dfb7", 00:15:29.723 "is_configured": true, 00:15:29.723 "data_offset": 0, 00:15:29.723 "data_size": 65536 00:15:29.723 } 00:15:29.723 ] 00:15:29.723 }' 00:15:29.723 13:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.723 13:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.290 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.290 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.290 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.290 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:30.290 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.290 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:30.290 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.290 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:30.290 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.290 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.290 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.290 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ee515b5b-58bf-47e2-b761-e0aeeb203fe5 00:15:30.290 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.290 13:10:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.290 [2024-12-06 13:10:17.298882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:30.290 [2024-12-06 13:10:17.299111] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:30.290 [2024-12-06 13:10:17.299134] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:30.290 [2024-12-06 13:10:17.299489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:30.290 [2024-12-06 13:10:17.299680] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:30.290 [2024-12-06 13:10:17.299702] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:30.290 [2024-12-06 13:10:17.299990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.290 NewBaseBdev 00:15:30.290 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.290 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:30.290 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:30.290 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:30.290 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:30.290 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:30.290 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:30.290 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:30.290 13:10:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.290 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.549 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.549 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:30.549 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.549 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.549 [ 00:15:30.549 { 00:15:30.549 "name": "NewBaseBdev", 00:15:30.549 "aliases": [ 00:15:30.549 "ee515b5b-58bf-47e2-b761-e0aeeb203fe5" 00:15:30.549 ], 00:15:30.549 "product_name": "Malloc disk", 00:15:30.549 "block_size": 512, 00:15:30.549 "num_blocks": 65536, 00:15:30.549 "uuid": "ee515b5b-58bf-47e2-b761-e0aeeb203fe5", 00:15:30.549 "assigned_rate_limits": { 00:15:30.549 "rw_ios_per_sec": 0, 00:15:30.549 "rw_mbytes_per_sec": 0, 00:15:30.549 "r_mbytes_per_sec": 0, 00:15:30.549 "w_mbytes_per_sec": 0 00:15:30.549 }, 00:15:30.549 "claimed": true, 00:15:30.549 "claim_type": "exclusive_write", 00:15:30.549 "zoned": false, 00:15:30.549 "supported_io_types": { 00:15:30.549 "read": true, 00:15:30.549 "write": true, 00:15:30.549 "unmap": true, 00:15:30.549 "flush": true, 00:15:30.549 "reset": true, 00:15:30.549 "nvme_admin": false, 00:15:30.549 "nvme_io": false, 00:15:30.549 "nvme_io_md": false, 00:15:30.549 "write_zeroes": true, 00:15:30.549 "zcopy": true, 00:15:30.549 "get_zone_info": false, 00:15:30.549 "zone_management": false, 00:15:30.549 "zone_append": false, 00:15:30.549 "compare": false, 00:15:30.549 "compare_and_write": false, 00:15:30.549 "abort": true, 00:15:30.549 "seek_hole": false, 00:15:30.549 "seek_data": false, 00:15:30.549 "copy": true, 00:15:30.549 "nvme_iov_md": false 00:15:30.549 }, 00:15:30.549 "memory_domains": [ 00:15:30.549 { 00:15:30.549 
"dma_device_id": "system", 00:15:30.549 "dma_device_type": 1 00:15:30.549 }, 00:15:30.549 { 00:15:30.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.549 "dma_device_type": 2 00:15:30.549 } 00:15:30.549 ], 00:15:30.549 "driver_specific": {} 00:15:30.549 } 00:15:30.549 ] 00:15:30.549 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.549 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:30.549 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:30.549 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.549 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.549 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:30.549 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:30.549 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:30.549 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.549 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.549 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.549 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.549 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.549 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.549 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:30.549 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.549 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.549 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.549 "name": "Existed_Raid", 00:15:30.549 "uuid": "dc2edf02-ab7c-41e2-a6e0-6131811f8795", 00:15:30.549 "strip_size_kb": 0, 00:15:30.549 "state": "online", 00:15:30.549 "raid_level": "raid1", 00:15:30.549 "superblock": false, 00:15:30.549 "num_base_bdevs": 3, 00:15:30.549 "num_base_bdevs_discovered": 3, 00:15:30.549 "num_base_bdevs_operational": 3, 00:15:30.549 "base_bdevs_list": [ 00:15:30.549 { 00:15:30.549 "name": "NewBaseBdev", 00:15:30.549 "uuid": "ee515b5b-58bf-47e2-b761-e0aeeb203fe5", 00:15:30.549 "is_configured": true, 00:15:30.549 "data_offset": 0, 00:15:30.549 "data_size": 65536 00:15:30.549 }, 00:15:30.549 { 00:15:30.549 "name": "BaseBdev2", 00:15:30.549 "uuid": "5c43abb2-0807-48c4-aac4-e238e31ee0df", 00:15:30.549 "is_configured": true, 00:15:30.549 "data_offset": 0, 00:15:30.549 "data_size": 65536 00:15:30.549 }, 00:15:30.549 { 00:15:30.549 "name": "BaseBdev3", 00:15:30.549 "uuid": "0f819164-7cf3-4e51-80ce-26f5b221dfb7", 00:15:30.549 "is_configured": true, 00:15:30.549 "data_offset": 0, 00:15:30.549 "data_size": 65536 00:15:30.549 } 00:15:30.549 ] 00:15:30.549 }' 00:15:30.549 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.549 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.116 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:31.116 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:31.116 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:31.116 13:10:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:31.116 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:31.116 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:31.116 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:31.116 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.116 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:31.116 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.116 [2024-12-06 13:10:17.839472] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:31.116 13:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.116 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:31.116 "name": "Existed_Raid", 00:15:31.116 "aliases": [ 00:15:31.116 "dc2edf02-ab7c-41e2-a6e0-6131811f8795" 00:15:31.116 ], 00:15:31.116 "product_name": "Raid Volume", 00:15:31.116 "block_size": 512, 00:15:31.116 "num_blocks": 65536, 00:15:31.116 "uuid": "dc2edf02-ab7c-41e2-a6e0-6131811f8795", 00:15:31.116 "assigned_rate_limits": { 00:15:31.116 "rw_ios_per_sec": 0, 00:15:31.116 "rw_mbytes_per_sec": 0, 00:15:31.116 "r_mbytes_per_sec": 0, 00:15:31.116 "w_mbytes_per_sec": 0 00:15:31.116 }, 00:15:31.116 "claimed": false, 00:15:31.116 "zoned": false, 00:15:31.116 "supported_io_types": { 00:15:31.116 "read": true, 00:15:31.116 "write": true, 00:15:31.116 "unmap": false, 00:15:31.116 "flush": false, 00:15:31.116 "reset": true, 00:15:31.116 "nvme_admin": false, 00:15:31.116 "nvme_io": false, 00:15:31.116 "nvme_io_md": false, 00:15:31.116 "write_zeroes": true, 00:15:31.116 "zcopy": false, 00:15:31.116 
"get_zone_info": false, 00:15:31.116 "zone_management": false, 00:15:31.116 "zone_append": false, 00:15:31.116 "compare": false, 00:15:31.116 "compare_and_write": false, 00:15:31.116 "abort": false, 00:15:31.116 "seek_hole": false, 00:15:31.116 "seek_data": false, 00:15:31.116 "copy": false, 00:15:31.116 "nvme_iov_md": false 00:15:31.116 }, 00:15:31.116 "memory_domains": [ 00:15:31.116 { 00:15:31.116 "dma_device_id": "system", 00:15:31.116 "dma_device_type": 1 00:15:31.116 }, 00:15:31.116 { 00:15:31.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.116 "dma_device_type": 2 00:15:31.116 }, 00:15:31.116 { 00:15:31.116 "dma_device_id": "system", 00:15:31.116 "dma_device_type": 1 00:15:31.116 }, 00:15:31.116 { 00:15:31.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.116 "dma_device_type": 2 00:15:31.116 }, 00:15:31.116 { 00:15:31.116 "dma_device_id": "system", 00:15:31.116 "dma_device_type": 1 00:15:31.116 }, 00:15:31.116 { 00:15:31.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.116 "dma_device_type": 2 00:15:31.116 } 00:15:31.116 ], 00:15:31.116 "driver_specific": { 00:15:31.116 "raid": { 00:15:31.116 "uuid": "dc2edf02-ab7c-41e2-a6e0-6131811f8795", 00:15:31.116 "strip_size_kb": 0, 00:15:31.116 "state": "online", 00:15:31.116 "raid_level": "raid1", 00:15:31.116 "superblock": false, 00:15:31.116 "num_base_bdevs": 3, 00:15:31.116 "num_base_bdevs_discovered": 3, 00:15:31.116 "num_base_bdevs_operational": 3, 00:15:31.116 "base_bdevs_list": [ 00:15:31.116 { 00:15:31.116 "name": "NewBaseBdev", 00:15:31.116 "uuid": "ee515b5b-58bf-47e2-b761-e0aeeb203fe5", 00:15:31.116 "is_configured": true, 00:15:31.116 "data_offset": 0, 00:15:31.116 "data_size": 65536 00:15:31.116 }, 00:15:31.116 { 00:15:31.116 "name": "BaseBdev2", 00:15:31.116 "uuid": "5c43abb2-0807-48c4-aac4-e238e31ee0df", 00:15:31.116 "is_configured": true, 00:15:31.116 "data_offset": 0, 00:15:31.116 "data_size": 65536 00:15:31.116 }, 00:15:31.116 { 00:15:31.116 "name": "BaseBdev3", 00:15:31.117 "uuid": 
"0f819164-7cf3-4e51-80ce-26f5b221dfb7", 00:15:31.117 "is_configured": true, 00:15:31.117 "data_offset": 0, 00:15:31.117 "data_size": 65536 00:15:31.117 } 00:15:31.117 ] 00:15:31.117 } 00:15:31.117 } 00:15:31.117 }' 00:15:31.117 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:31.117 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:31.117 BaseBdev2 00:15:31.117 BaseBdev3' 00:15:31.117 13:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.117 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:31.117 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:31.117 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:31.117 13:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.117 13:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.117 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.117 13:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.117 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:31.117 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:31.117 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:31.117 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:15:31.117 13:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.117 13:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.117 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.117 13:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.117 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:31.117 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:31.117 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:31.117 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.117 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:31.117 13:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.117 13:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.376 13:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.376 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:31.376 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:31.376 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:31.376 13:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.376 13:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.376 
[2024-12-06 13:10:18.171173] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:31.376 [2024-12-06 13:10:18.171212] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:31.376 [2024-12-06 13:10:18.171301] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:31.376 [2024-12-06 13:10:18.171705] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:31.376 [2024-12-06 13:10:18.171724] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:31.376 13:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.376 13:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67693 00:15:31.376 13:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67693 ']' 00:15:31.376 13:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67693 00:15:31.376 13:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:31.376 13:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:31.376 13:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67693 00:15:31.376 killing process with pid 67693 00:15:31.376 13:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:31.376 13:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:31.376 13:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67693' 00:15:31.376 13:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67693 00:15:31.376 [2024-12-06 
13:10:18.205733] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:31.376 13:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67693 00:15:31.634 [2024-12-06 13:10:18.469922] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:32.582 13:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:32.582 00:15:32.582 real 0m11.724s 00:15:32.582 user 0m19.493s 00:15:32.582 sys 0m1.616s 00:15:32.582 13:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:32.582 ************************************ 00:15:32.582 END TEST raid_state_function_test 00:15:32.582 ************************************ 00:15:32.582 13:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.582 13:10:19 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:15:32.582 13:10:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:32.582 13:10:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:32.582 13:10:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:32.582 ************************************ 00:15:32.582 START TEST raid_state_function_test_sb 00:15:32.582 ************************************ 00:15:32.582 13:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:15:32.582 13:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:32.582 13:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:32.582 13:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:32.582 13:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:32.582 13:10:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:32.582 13:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.582 13:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:32.582 13:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:32.582 13:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.582 13:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:32.582 13:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:32.582 13:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.582 13:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:32.582 13:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:32.582 13:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.582 13:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:32.582 13:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:32.582 13:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:32.582 13:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:32.582 13:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:32.582 13:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:32.582 13:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:32.582 
13:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:32.582 13:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:32.582 13:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:32.582 Process raid pid: 68327 00:15:32.582 13:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68327 00:15:32.582 13:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68327' 00:15:32.583 13:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68327 00:15:32.583 13:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:32.583 13:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68327 ']' 00:15:32.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.583 13:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.583 13:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:32.583 13:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.583 13:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:32.583 13:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.841 [2024-12-06 13:10:19.681196] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:15:32.841 [2024-12-06 13:10:19.681338] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.099 [2024-12-06 13:10:19.856632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.099 [2024-12-06 13:10:19.989449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.356 [2024-12-06 13:10:20.199397] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.356 [2024-12-06 13:10:20.199671] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.922 13:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:33.922 13:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:33.922 13:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:33.922 13:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.922 13:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.922 [2024-12-06 13:10:20.711648] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:33.922 [2024-12-06 13:10:20.711719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:33.922 [2024-12-06 13:10:20.711737] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:33.922 [2024-12-06 13:10:20.711753] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:33.922 [2024-12-06 13:10:20.711763] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:33.922 [2024-12-06 13:10:20.711777] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:33.922 13:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.922 13:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:33.922 13:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:33.922 13:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:33.922 13:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:33.922 13:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:33.922 13:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:33.922 13:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.922 13:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.922 13:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.922 13:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.922 13:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.922 13:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.922 13:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.922 13:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.922 13:10:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.922 13:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.922 "name": "Existed_Raid", 00:15:33.922 "uuid": "187e5df3-7389-43f3-8677-186f432f58b3", 00:15:33.922 "strip_size_kb": 0, 00:15:33.922 "state": "configuring", 00:15:33.922 "raid_level": "raid1", 00:15:33.922 "superblock": true, 00:15:33.922 "num_base_bdevs": 3, 00:15:33.922 "num_base_bdevs_discovered": 0, 00:15:33.922 "num_base_bdevs_operational": 3, 00:15:33.922 "base_bdevs_list": [ 00:15:33.922 { 00:15:33.922 "name": "BaseBdev1", 00:15:33.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.922 "is_configured": false, 00:15:33.922 "data_offset": 0, 00:15:33.922 "data_size": 0 00:15:33.922 }, 00:15:33.922 { 00:15:33.922 "name": "BaseBdev2", 00:15:33.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.922 "is_configured": false, 00:15:33.922 "data_offset": 0, 00:15:33.922 "data_size": 0 00:15:33.922 }, 00:15:33.922 { 00:15:33.922 "name": "BaseBdev3", 00:15:33.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.922 "is_configured": false, 00:15:33.922 "data_offset": 0, 00:15:33.922 "data_size": 0 00:15:33.922 } 00:15:33.922 ] 00:15:33.922 }' 00:15:33.922 13:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.922 13:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.180 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:34.180 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.180 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.180 [2024-12-06 13:10:21.187717] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:34.180 [2024-12-06 13:10:21.187889] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:34.180 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.180 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:34.180 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.180 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.439 [2024-12-06 13:10:21.195705] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:34.439 [2024-12-06 13:10:21.195764] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:34.439 [2024-12-06 13:10:21.195780] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:34.439 [2024-12-06 13:10:21.195796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:34.439 [2024-12-06 13:10:21.195805] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:34.439 [2024-12-06 13:10:21.195818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:34.439 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.439 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:34.439 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.439 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.439 [2024-12-06 13:10:21.240898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:34.439 BaseBdev1 
00:15:34.439 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.439 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:34.439 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:34.439 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:34.439 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:34.439 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:34.439 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:34.439 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:34.439 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.439 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.439 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.439 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:34.439 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.439 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.439 [ 00:15:34.439 { 00:15:34.439 "name": "BaseBdev1", 00:15:34.439 "aliases": [ 00:15:34.440 "8e265785-a82f-4e12-92af-aea28ebc18a4" 00:15:34.440 ], 00:15:34.440 "product_name": "Malloc disk", 00:15:34.440 "block_size": 512, 00:15:34.440 "num_blocks": 65536, 00:15:34.440 "uuid": "8e265785-a82f-4e12-92af-aea28ebc18a4", 00:15:34.440 "assigned_rate_limits": { 00:15:34.440 
"rw_ios_per_sec": 0, 00:15:34.440 "rw_mbytes_per_sec": 0, 00:15:34.440 "r_mbytes_per_sec": 0, 00:15:34.440 "w_mbytes_per_sec": 0 00:15:34.440 }, 00:15:34.440 "claimed": true, 00:15:34.440 "claim_type": "exclusive_write", 00:15:34.440 "zoned": false, 00:15:34.440 "supported_io_types": { 00:15:34.440 "read": true, 00:15:34.440 "write": true, 00:15:34.440 "unmap": true, 00:15:34.440 "flush": true, 00:15:34.440 "reset": true, 00:15:34.440 "nvme_admin": false, 00:15:34.440 "nvme_io": false, 00:15:34.440 "nvme_io_md": false, 00:15:34.440 "write_zeroes": true, 00:15:34.440 "zcopy": true, 00:15:34.440 "get_zone_info": false, 00:15:34.440 "zone_management": false, 00:15:34.440 "zone_append": false, 00:15:34.440 "compare": false, 00:15:34.440 "compare_and_write": false, 00:15:34.440 "abort": true, 00:15:34.440 "seek_hole": false, 00:15:34.440 "seek_data": false, 00:15:34.440 "copy": true, 00:15:34.440 "nvme_iov_md": false 00:15:34.440 }, 00:15:34.440 "memory_domains": [ 00:15:34.440 { 00:15:34.440 "dma_device_id": "system", 00:15:34.440 "dma_device_type": 1 00:15:34.440 }, 00:15:34.440 { 00:15:34.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.440 "dma_device_type": 2 00:15:34.440 } 00:15:34.440 ], 00:15:34.440 "driver_specific": {} 00:15:34.440 } 00:15:34.440 ] 00:15:34.440 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.440 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:34.440 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:34.440 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:34.440 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.440 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:15:34.440 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:34.440 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:34.440 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.440 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.440 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.440 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.440 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.440 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.440 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.440 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.440 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.440 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.440 "name": "Existed_Raid", 00:15:34.440 "uuid": "718b310d-7a37-4c14-95c4-3506ea1f29c4", 00:15:34.440 "strip_size_kb": 0, 00:15:34.440 "state": "configuring", 00:15:34.440 "raid_level": "raid1", 00:15:34.440 "superblock": true, 00:15:34.440 "num_base_bdevs": 3, 00:15:34.440 "num_base_bdevs_discovered": 1, 00:15:34.440 "num_base_bdevs_operational": 3, 00:15:34.440 "base_bdevs_list": [ 00:15:34.440 { 00:15:34.440 "name": "BaseBdev1", 00:15:34.440 "uuid": "8e265785-a82f-4e12-92af-aea28ebc18a4", 00:15:34.440 "is_configured": true, 00:15:34.440 "data_offset": 2048, 00:15:34.440 "data_size": 63488 
00:15:34.440 }, 00:15:34.440 { 00:15:34.440 "name": "BaseBdev2", 00:15:34.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.440 "is_configured": false, 00:15:34.440 "data_offset": 0, 00:15:34.440 "data_size": 0 00:15:34.440 }, 00:15:34.440 { 00:15:34.440 "name": "BaseBdev3", 00:15:34.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.440 "is_configured": false, 00:15:34.440 "data_offset": 0, 00:15:34.440 "data_size": 0 00:15:34.440 } 00:15:34.440 ] 00:15:34.440 }' 00:15:34.440 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.440 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.034 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:35.034 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.034 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.034 [2024-12-06 13:10:21.825128] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:35.034 [2024-12-06 13:10:21.825196] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:35.034 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.034 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:35.034 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.034 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.034 [2024-12-06 13:10:21.833172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:35.034 [2024-12-06 13:10:21.835743] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:35.034 [2024-12-06 13:10:21.835796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:35.034 [2024-12-06 13:10:21.835812] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:35.034 [2024-12-06 13:10:21.835828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:35.034 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.034 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:35.034 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:35.034 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:35.034 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.034 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.034 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:35.034 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:35.034 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:35.034 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.034 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.034 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.034 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:15:35.034 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.034 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.034 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.034 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.034 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.034 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.034 "name": "Existed_Raid", 00:15:35.034 "uuid": "573227da-1342-4396-9054-d492250a7777", 00:15:35.034 "strip_size_kb": 0, 00:15:35.034 "state": "configuring", 00:15:35.034 "raid_level": "raid1", 00:15:35.034 "superblock": true, 00:15:35.034 "num_base_bdevs": 3, 00:15:35.034 "num_base_bdevs_discovered": 1, 00:15:35.034 "num_base_bdevs_operational": 3, 00:15:35.034 "base_bdevs_list": [ 00:15:35.034 { 00:15:35.034 "name": "BaseBdev1", 00:15:35.034 "uuid": "8e265785-a82f-4e12-92af-aea28ebc18a4", 00:15:35.034 "is_configured": true, 00:15:35.034 "data_offset": 2048, 00:15:35.034 "data_size": 63488 00:15:35.034 }, 00:15:35.034 { 00:15:35.034 "name": "BaseBdev2", 00:15:35.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.034 "is_configured": false, 00:15:35.034 "data_offset": 0, 00:15:35.034 "data_size": 0 00:15:35.034 }, 00:15:35.034 { 00:15:35.034 "name": "BaseBdev3", 00:15:35.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.034 "is_configured": false, 00:15:35.034 "data_offset": 0, 00:15:35.034 "data_size": 0 00:15:35.034 } 00:15:35.034 ] 00:15:35.034 }' 00:15:35.034 13:10:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.034 13:10:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:35.600 13:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:35.600 13:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.600 13:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.600 [2024-12-06 13:10:22.404323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:35.600 BaseBdev2 00:15:35.600 13:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.600 13:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:35.600 13:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:35.600 13:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:35.600 13:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:35.600 13:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:35.600 13:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:35.600 13:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:35.600 13:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.600 13:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.600 13:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.600 13:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:35.600 13:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:35.600 13:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.600 [ 00:15:35.600 { 00:15:35.600 "name": "BaseBdev2", 00:15:35.600 "aliases": [ 00:15:35.600 "2f4dd562-a28a-4055-953f-26892837bf91" 00:15:35.600 ], 00:15:35.600 "product_name": "Malloc disk", 00:15:35.600 "block_size": 512, 00:15:35.600 "num_blocks": 65536, 00:15:35.600 "uuid": "2f4dd562-a28a-4055-953f-26892837bf91", 00:15:35.600 "assigned_rate_limits": { 00:15:35.600 "rw_ios_per_sec": 0, 00:15:35.600 "rw_mbytes_per_sec": 0, 00:15:35.600 "r_mbytes_per_sec": 0, 00:15:35.600 "w_mbytes_per_sec": 0 00:15:35.600 }, 00:15:35.600 "claimed": true, 00:15:35.600 "claim_type": "exclusive_write", 00:15:35.600 "zoned": false, 00:15:35.600 "supported_io_types": { 00:15:35.600 "read": true, 00:15:35.600 "write": true, 00:15:35.601 "unmap": true, 00:15:35.601 "flush": true, 00:15:35.601 "reset": true, 00:15:35.601 "nvme_admin": false, 00:15:35.601 "nvme_io": false, 00:15:35.601 "nvme_io_md": false, 00:15:35.601 "write_zeroes": true, 00:15:35.601 "zcopy": true, 00:15:35.601 "get_zone_info": false, 00:15:35.601 "zone_management": false, 00:15:35.601 "zone_append": false, 00:15:35.601 "compare": false, 00:15:35.601 "compare_and_write": false, 00:15:35.601 "abort": true, 00:15:35.601 "seek_hole": false, 00:15:35.601 "seek_data": false, 00:15:35.601 "copy": true, 00:15:35.601 "nvme_iov_md": false 00:15:35.601 }, 00:15:35.601 "memory_domains": [ 00:15:35.601 { 00:15:35.601 "dma_device_id": "system", 00:15:35.601 "dma_device_type": 1 00:15:35.601 }, 00:15:35.601 { 00:15:35.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.601 "dma_device_type": 2 00:15:35.601 } 00:15:35.601 ], 00:15:35.601 "driver_specific": {} 00:15:35.601 } 00:15:35.601 ] 00:15:35.601 13:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.601 13:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:15:35.601 13:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:35.601 13:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:35.601 13:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:35.601 13:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.601 13:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.601 13:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:35.601 13:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:35.601 13:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:35.601 13:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.601 13:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.601 13:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.601 13:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.601 13:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.601 13:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.601 13:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.601 13:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.601 13:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.601 
13:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.601 "name": "Existed_Raid", 00:15:35.601 "uuid": "573227da-1342-4396-9054-d492250a7777", 00:15:35.601 "strip_size_kb": 0, 00:15:35.601 "state": "configuring", 00:15:35.601 "raid_level": "raid1", 00:15:35.601 "superblock": true, 00:15:35.601 "num_base_bdevs": 3, 00:15:35.601 "num_base_bdevs_discovered": 2, 00:15:35.601 "num_base_bdevs_operational": 3, 00:15:35.601 "base_bdevs_list": [ 00:15:35.601 { 00:15:35.601 "name": "BaseBdev1", 00:15:35.601 "uuid": "8e265785-a82f-4e12-92af-aea28ebc18a4", 00:15:35.601 "is_configured": true, 00:15:35.601 "data_offset": 2048, 00:15:35.601 "data_size": 63488 00:15:35.601 }, 00:15:35.601 { 00:15:35.601 "name": "BaseBdev2", 00:15:35.601 "uuid": "2f4dd562-a28a-4055-953f-26892837bf91", 00:15:35.601 "is_configured": true, 00:15:35.601 "data_offset": 2048, 00:15:35.601 "data_size": 63488 00:15:35.601 }, 00:15:35.601 { 00:15:35.601 "name": "BaseBdev3", 00:15:35.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.601 "is_configured": false, 00:15:35.601 "data_offset": 0, 00:15:35.601 "data_size": 0 00:15:35.601 } 00:15:35.601 ] 00:15:35.601 }' 00:15:35.601 13:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.601 13:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.168 13:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:36.168 13:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.168 13:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.168 [2024-12-06 13:10:23.009382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:36.168 [2024-12-06 13:10:23.009741] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:15:36.168 [2024-12-06 13:10:23.009770] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:36.168 BaseBdev3 00:15:36.168 [2024-12-06 13:10:23.010102] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:36.168 [2024-12-06 13:10:23.010311] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:36.168 [2024-12-06 13:10:23.010343] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:36.168 [2024-12-06 13:10:23.010550] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.168 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.168 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:36.168 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:36.168 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:36.168 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:36.168 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:36.168 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:36.168 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:36.168 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.168 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.168 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.168 13:10:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:36.168 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.168 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.168 [ 00:15:36.168 { 00:15:36.168 "name": "BaseBdev3", 00:15:36.168 "aliases": [ 00:15:36.168 "5a753107-3fab-4f29-acec-fae4cb7e14ce" 00:15:36.168 ], 00:15:36.168 "product_name": "Malloc disk", 00:15:36.168 "block_size": 512, 00:15:36.168 "num_blocks": 65536, 00:15:36.168 "uuid": "5a753107-3fab-4f29-acec-fae4cb7e14ce", 00:15:36.168 "assigned_rate_limits": { 00:15:36.168 "rw_ios_per_sec": 0, 00:15:36.168 "rw_mbytes_per_sec": 0, 00:15:36.168 "r_mbytes_per_sec": 0, 00:15:36.168 "w_mbytes_per_sec": 0 00:15:36.168 }, 00:15:36.168 "claimed": true, 00:15:36.168 "claim_type": "exclusive_write", 00:15:36.168 "zoned": false, 00:15:36.168 "supported_io_types": { 00:15:36.168 "read": true, 00:15:36.168 "write": true, 00:15:36.168 "unmap": true, 00:15:36.168 "flush": true, 00:15:36.168 "reset": true, 00:15:36.168 "nvme_admin": false, 00:15:36.168 "nvme_io": false, 00:15:36.168 "nvme_io_md": false, 00:15:36.168 "write_zeroes": true, 00:15:36.168 "zcopy": true, 00:15:36.168 "get_zone_info": false, 00:15:36.168 "zone_management": false, 00:15:36.168 "zone_append": false, 00:15:36.168 "compare": false, 00:15:36.168 "compare_and_write": false, 00:15:36.168 "abort": true, 00:15:36.168 "seek_hole": false, 00:15:36.168 "seek_data": false, 00:15:36.168 "copy": true, 00:15:36.168 "nvme_iov_md": false 00:15:36.168 }, 00:15:36.168 "memory_domains": [ 00:15:36.169 { 00:15:36.169 "dma_device_id": "system", 00:15:36.169 "dma_device_type": 1 00:15:36.169 }, 00:15:36.169 { 00:15:36.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.169 "dma_device_type": 2 00:15:36.169 } 00:15:36.169 ], 00:15:36.169 "driver_specific": {} 00:15:36.169 } 00:15:36.169 ] 
00:15:36.169 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.169 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:36.169 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:36.169 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:36.169 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:36.169 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.169 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.169 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:36.169 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:36.169 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:36.169 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.169 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.169 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.169 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.169 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.169 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.169 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.169 
13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.169 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.169 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.169 "name": "Existed_Raid", 00:15:36.169 "uuid": "573227da-1342-4396-9054-d492250a7777", 00:15:36.169 "strip_size_kb": 0, 00:15:36.169 "state": "online", 00:15:36.169 "raid_level": "raid1", 00:15:36.169 "superblock": true, 00:15:36.169 "num_base_bdevs": 3, 00:15:36.169 "num_base_bdevs_discovered": 3, 00:15:36.169 "num_base_bdevs_operational": 3, 00:15:36.169 "base_bdevs_list": [ 00:15:36.169 { 00:15:36.169 "name": "BaseBdev1", 00:15:36.169 "uuid": "8e265785-a82f-4e12-92af-aea28ebc18a4", 00:15:36.169 "is_configured": true, 00:15:36.169 "data_offset": 2048, 00:15:36.169 "data_size": 63488 00:15:36.169 }, 00:15:36.169 { 00:15:36.169 "name": "BaseBdev2", 00:15:36.169 "uuid": "2f4dd562-a28a-4055-953f-26892837bf91", 00:15:36.169 "is_configured": true, 00:15:36.169 "data_offset": 2048, 00:15:36.169 "data_size": 63488 00:15:36.169 }, 00:15:36.169 { 00:15:36.169 "name": "BaseBdev3", 00:15:36.169 "uuid": "5a753107-3fab-4f29-acec-fae4cb7e14ce", 00:15:36.169 "is_configured": true, 00:15:36.169 "data_offset": 2048, 00:15:36.169 "data_size": 63488 00:15:36.169 } 00:15:36.169 ] 00:15:36.169 }' 00:15:36.169 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.169 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.736 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:36.736 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:36.736 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:15:36.736 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:36.736 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:36.736 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:36.736 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:36.736 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:36.736 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.736 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.736 [2024-12-06 13:10:23.562018] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:36.736 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.736 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:36.736 "name": "Existed_Raid", 00:15:36.736 "aliases": [ 00:15:36.736 "573227da-1342-4396-9054-d492250a7777" 00:15:36.736 ], 00:15:36.736 "product_name": "Raid Volume", 00:15:36.736 "block_size": 512, 00:15:36.736 "num_blocks": 63488, 00:15:36.736 "uuid": "573227da-1342-4396-9054-d492250a7777", 00:15:36.737 "assigned_rate_limits": { 00:15:36.737 "rw_ios_per_sec": 0, 00:15:36.737 "rw_mbytes_per_sec": 0, 00:15:36.737 "r_mbytes_per_sec": 0, 00:15:36.737 "w_mbytes_per_sec": 0 00:15:36.737 }, 00:15:36.737 "claimed": false, 00:15:36.737 "zoned": false, 00:15:36.737 "supported_io_types": { 00:15:36.737 "read": true, 00:15:36.737 "write": true, 00:15:36.737 "unmap": false, 00:15:36.737 "flush": false, 00:15:36.737 "reset": true, 00:15:36.737 "nvme_admin": false, 00:15:36.737 "nvme_io": false, 00:15:36.737 "nvme_io_md": false, 00:15:36.737 "write_zeroes": true, 
00:15:36.737 "zcopy": false, 00:15:36.737 "get_zone_info": false, 00:15:36.737 "zone_management": false, 00:15:36.737 "zone_append": false, 00:15:36.737 "compare": false, 00:15:36.737 "compare_and_write": false, 00:15:36.737 "abort": false, 00:15:36.737 "seek_hole": false, 00:15:36.737 "seek_data": false, 00:15:36.737 "copy": false, 00:15:36.737 "nvme_iov_md": false 00:15:36.737 }, 00:15:36.737 "memory_domains": [ 00:15:36.737 { 00:15:36.737 "dma_device_id": "system", 00:15:36.737 "dma_device_type": 1 00:15:36.737 }, 00:15:36.737 { 00:15:36.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.737 "dma_device_type": 2 00:15:36.737 }, 00:15:36.737 { 00:15:36.737 "dma_device_id": "system", 00:15:36.737 "dma_device_type": 1 00:15:36.737 }, 00:15:36.737 { 00:15:36.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.737 "dma_device_type": 2 00:15:36.737 }, 00:15:36.737 { 00:15:36.737 "dma_device_id": "system", 00:15:36.737 "dma_device_type": 1 00:15:36.737 }, 00:15:36.737 { 00:15:36.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.737 "dma_device_type": 2 00:15:36.737 } 00:15:36.737 ], 00:15:36.737 "driver_specific": { 00:15:36.737 "raid": { 00:15:36.737 "uuid": "573227da-1342-4396-9054-d492250a7777", 00:15:36.737 "strip_size_kb": 0, 00:15:36.737 "state": "online", 00:15:36.737 "raid_level": "raid1", 00:15:36.737 "superblock": true, 00:15:36.737 "num_base_bdevs": 3, 00:15:36.737 "num_base_bdevs_discovered": 3, 00:15:36.737 "num_base_bdevs_operational": 3, 00:15:36.737 "base_bdevs_list": [ 00:15:36.737 { 00:15:36.737 "name": "BaseBdev1", 00:15:36.737 "uuid": "8e265785-a82f-4e12-92af-aea28ebc18a4", 00:15:36.737 "is_configured": true, 00:15:36.737 "data_offset": 2048, 00:15:36.737 "data_size": 63488 00:15:36.737 }, 00:15:36.737 { 00:15:36.737 "name": "BaseBdev2", 00:15:36.737 "uuid": "2f4dd562-a28a-4055-953f-26892837bf91", 00:15:36.737 "is_configured": true, 00:15:36.737 "data_offset": 2048, 00:15:36.737 "data_size": 63488 00:15:36.737 }, 00:15:36.737 { 
00:15:36.737 "name": "BaseBdev3", 00:15:36.737 "uuid": "5a753107-3fab-4f29-acec-fae4cb7e14ce", 00:15:36.737 "is_configured": true, 00:15:36.737 "data_offset": 2048, 00:15:36.737 "data_size": 63488 00:15:36.737 } 00:15:36.737 ] 00:15:36.737 } 00:15:36.737 } 00:15:36.737 }' 00:15:36.737 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:36.737 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:36.737 BaseBdev2 00:15:36.737 BaseBdev3' 00:15:36.737 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.737 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:36.737 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.737 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:36.737 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.737 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.737 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.737 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.997 13:10:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.997 [2024-12-06 13:10:23.853719] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.997 
13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.997 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.997 "name": "Existed_Raid", 00:15:36.997 "uuid": "573227da-1342-4396-9054-d492250a7777", 00:15:36.997 "strip_size_kb": 0, 00:15:36.997 "state": "online", 00:15:36.997 "raid_level": "raid1", 00:15:36.997 "superblock": true, 00:15:36.997 "num_base_bdevs": 3, 00:15:36.997 "num_base_bdevs_discovered": 2, 00:15:36.997 "num_base_bdevs_operational": 2, 00:15:36.997 "base_bdevs_list": [ 00:15:36.997 { 00:15:36.997 "name": null, 00:15:36.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.997 "is_configured": false, 00:15:36.997 "data_offset": 0, 00:15:36.997 "data_size": 63488 00:15:36.997 }, 00:15:36.997 { 00:15:36.997 "name": "BaseBdev2", 00:15:36.997 "uuid": "2f4dd562-a28a-4055-953f-26892837bf91", 00:15:36.997 "is_configured": true, 00:15:36.997 "data_offset": 2048, 00:15:36.997 "data_size": 63488 00:15:36.997 }, 00:15:36.997 { 00:15:36.997 "name": "BaseBdev3", 00:15:36.997 "uuid": "5a753107-3fab-4f29-acec-fae4cb7e14ce", 00:15:36.997 "is_configured": true, 00:15:36.997 "data_offset": 2048, 00:15:36.997 "data_size": 63488 00:15:36.998 } 00:15:36.998 ] 00:15:36.998 }' 00:15:36.998 13:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.998 
13:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.565 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:37.565 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:37.565 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:37.565 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.565 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.565 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.565 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.565 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:37.565 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:37.565 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:37.565 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.565 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.565 [2024-12-06 13:10:24.521338] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:37.824 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.824 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:37.824 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:37.824 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:37.824 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:37.824 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.824 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.824 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.824 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:37.824 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:37.824 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:37.824 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.824 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.824 [2024-12-06 13:10:24.666789] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:37.824 [2024-12-06 13:10:24.666938] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:37.824 [2024-12-06 13:10:24.752428] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:37.824 [2024-12-06 13:10:24.752526] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:37.824 [2024-12-06 13:10:24.752548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:37.824 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.824 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:37.824 13:10:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:37.824 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.824 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:37.824 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.824 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.824 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.824 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:37.824 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:37.824 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:37.824 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:37.824 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:37.824 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:37.824 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.824 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.084 BaseBdev2 00:15:38.084 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.084 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:38.084 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:38.084 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:15:38.084 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:38.084 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:38.084 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:38.084 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:38.084 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.084 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.084 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.084 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:38.084 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.084 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.084 [ 00:15:38.084 { 00:15:38.084 "name": "BaseBdev2", 00:15:38.084 "aliases": [ 00:15:38.084 "7aa70e1a-7fd8-4e89-a92c-b13ffc78b45a" 00:15:38.084 ], 00:15:38.084 "product_name": "Malloc disk", 00:15:38.084 "block_size": 512, 00:15:38.084 "num_blocks": 65536, 00:15:38.084 "uuid": "7aa70e1a-7fd8-4e89-a92c-b13ffc78b45a", 00:15:38.084 "assigned_rate_limits": { 00:15:38.084 "rw_ios_per_sec": 0, 00:15:38.084 "rw_mbytes_per_sec": 0, 00:15:38.084 "r_mbytes_per_sec": 0, 00:15:38.084 "w_mbytes_per_sec": 0 00:15:38.084 }, 00:15:38.084 "claimed": false, 00:15:38.084 "zoned": false, 00:15:38.084 "supported_io_types": { 00:15:38.084 "read": true, 00:15:38.084 "write": true, 00:15:38.084 "unmap": true, 00:15:38.084 "flush": true, 00:15:38.084 "reset": true, 00:15:38.084 "nvme_admin": false, 00:15:38.084 "nvme_io": false, 00:15:38.084 
"nvme_io_md": false, 00:15:38.084 "write_zeroes": true, 00:15:38.084 "zcopy": true, 00:15:38.084 "get_zone_info": false, 00:15:38.084 "zone_management": false, 00:15:38.084 "zone_append": false, 00:15:38.084 "compare": false, 00:15:38.084 "compare_and_write": false, 00:15:38.084 "abort": true, 00:15:38.084 "seek_hole": false, 00:15:38.084 "seek_data": false, 00:15:38.084 "copy": true, 00:15:38.084 "nvme_iov_md": false 00:15:38.084 }, 00:15:38.084 "memory_domains": [ 00:15:38.084 { 00:15:38.084 "dma_device_id": "system", 00:15:38.084 "dma_device_type": 1 00:15:38.084 }, 00:15:38.084 { 00:15:38.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.084 "dma_device_type": 2 00:15:38.084 } 00:15:38.084 ], 00:15:38.084 "driver_specific": {} 00:15:38.084 } 00:15:38.084 ] 00:15:38.084 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.084 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:38.084 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:38.084 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:38.084 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:38.084 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.084 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.084 BaseBdev3 00:15:38.084 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.084 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:38.084 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:38.084 13:10:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:38.084 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:38.084 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:38.084 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:38.084 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:38.084 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.084 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.084 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.084 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:38.084 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.084 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.084 [ 00:15:38.084 { 00:15:38.084 "name": "BaseBdev3", 00:15:38.084 "aliases": [ 00:15:38.084 "0da50a57-4915-4c98-9146-4fa674c3ea1e" 00:15:38.084 ], 00:15:38.084 "product_name": "Malloc disk", 00:15:38.084 "block_size": 512, 00:15:38.084 "num_blocks": 65536, 00:15:38.085 "uuid": "0da50a57-4915-4c98-9146-4fa674c3ea1e", 00:15:38.085 "assigned_rate_limits": { 00:15:38.085 "rw_ios_per_sec": 0, 00:15:38.085 "rw_mbytes_per_sec": 0, 00:15:38.085 "r_mbytes_per_sec": 0, 00:15:38.085 "w_mbytes_per_sec": 0 00:15:38.085 }, 00:15:38.085 "claimed": false, 00:15:38.085 "zoned": false, 00:15:38.085 "supported_io_types": { 00:15:38.085 "read": true, 00:15:38.085 "write": true, 00:15:38.085 "unmap": true, 00:15:38.085 "flush": true, 00:15:38.085 "reset": true, 00:15:38.085 "nvme_admin": false, 
00:15:38.085 "nvme_io": false, 00:15:38.085 "nvme_io_md": false, 00:15:38.085 "write_zeroes": true, 00:15:38.085 "zcopy": true, 00:15:38.085 "get_zone_info": false, 00:15:38.085 "zone_management": false, 00:15:38.085 "zone_append": false, 00:15:38.085 "compare": false, 00:15:38.085 "compare_and_write": false, 00:15:38.085 "abort": true, 00:15:38.085 "seek_hole": false, 00:15:38.085 "seek_data": false, 00:15:38.085 "copy": true, 00:15:38.085 "nvme_iov_md": false 00:15:38.085 }, 00:15:38.085 "memory_domains": [ 00:15:38.085 { 00:15:38.085 "dma_device_id": "system", 00:15:38.085 "dma_device_type": 1 00:15:38.085 }, 00:15:38.085 { 00:15:38.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.085 "dma_device_type": 2 00:15:38.085 } 00:15:38.085 ], 00:15:38.085 "driver_specific": {} 00:15:38.085 } 00:15:38.085 ] 00:15:38.085 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.085 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:38.085 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:38.085 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:38.085 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:38.085 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.085 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.085 [2024-12-06 13:10:24.976817] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:38.085 [2024-12-06 13:10:24.977046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:38.085 [2024-12-06 13:10:24.977173] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:38.085 [2024-12-06 13:10:24.979663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:38.085 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.085 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:38.085 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.085 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.085 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:38.085 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:38.085 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:38.085 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.085 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.085 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.085 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.085 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.085 13:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.085 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.085 13:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.085 
13:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.085 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.085 "name": "Existed_Raid", 00:15:38.085 "uuid": "5eca4590-b351-41f4-92db-85a1637051b7", 00:15:38.085 "strip_size_kb": 0, 00:15:38.085 "state": "configuring", 00:15:38.085 "raid_level": "raid1", 00:15:38.085 "superblock": true, 00:15:38.085 "num_base_bdevs": 3, 00:15:38.085 "num_base_bdevs_discovered": 2, 00:15:38.085 "num_base_bdevs_operational": 3, 00:15:38.085 "base_bdevs_list": [ 00:15:38.085 { 00:15:38.085 "name": "BaseBdev1", 00:15:38.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.085 "is_configured": false, 00:15:38.085 "data_offset": 0, 00:15:38.085 "data_size": 0 00:15:38.085 }, 00:15:38.085 { 00:15:38.085 "name": "BaseBdev2", 00:15:38.085 "uuid": "7aa70e1a-7fd8-4e89-a92c-b13ffc78b45a", 00:15:38.085 "is_configured": true, 00:15:38.085 "data_offset": 2048, 00:15:38.085 "data_size": 63488 00:15:38.085 }, 00:15:38.085 { 00:15:38.085 "name": "BaseBdev3", 00:15:38.085 "uuid": "0da50a57-4915-4c98-9146-4fa674c3ea1e", 00:15:38.085 "is_configured": true, 00:15:38.085 "data_offset": 2048, 00:15:38.085 "data_size": 63488 00:15:38.085 } 00:15:38.085 ] 00:15:38.085 }' 00:15:38.085 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.085 13:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.651 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:38.651 13:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.651 13:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.651 [2024-12-06 13:10:25.504991] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:38.651 13:10:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.651 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:38.651 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.651 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.651 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:38.651 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:38.651 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:38.651 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.651 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.651 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.651 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.651 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.651 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.651 13:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.651 13:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.651 13:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.651 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.651 "name": 
"Existed_Raid", 00:15:38.651 "uuid": "5eca4590-b351-41f4-92db-85a1637051b7", 00:15:38.651 "strip_size_kb": 0, 00:15:38.651 "state": "configuring", 00:15:38.651 "raid_level": "raid1", 00:15:38.651 "superblock": true, 00:15:38.651 "num_base_bdevs": 3, 00:15:38.651 "num_base_bdevs_discovered": 1, 00:15:38.651 "num_base_bdevs_operational": 3, 00:15:38.651 "base_bdevs_list": [ 00:15:38.651 { 00:15:38.651 "name": "BaseBdev1", 00:15:38.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.651 "is_configured": false, 00:15:38.651 "data_offset": 0, 00:15:38.651 "data_size": 0 00:15:38.651 }, 00:15:38.651 { 00:15:38.651 "name": null, 00:15:38.651 "uuid": "7aa70e1a-7fd8-4e89-a92c-b13ffc78b45a", 00:15:38.651 "is_configured": false, 00:15:38.651 "data_offset": 0, 00:15:38.651 "data_size": 63488 00:15:38.651 }, 00:15:38.651 { 00:15:38.651 "name": "BaseBdev3", 00:15:38.651 "uuid": "0da50a57-4915-4c98-9146-4fa674c3ea1e", 00:15:38.651 "is_configured": true, 00:15:38.651 "data_offset": 2048, 00:15:38.651 "data_size": 63488 00:15:38.651 } 00:15:38.651 ] 00:15:38.651 }' 00:15:38.651 13:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.651 13:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:39.217 
13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.217 [2024-12-06 13:10:26.122572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:39.217 BaseBdev1 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.217 [ 00:15:39.217 { 00:15:39.217 "name": "BaseBdev1", 00:15:39.217 "aliases": [ 00:15:39.217 "9869c942-3987-456d-b65b-a89454b49005" 00:15:39.217 ], 00:15:39.217 "product_name": "Malloc disk", 00:15:39.217 "block_size": 512, 00:15:39.217 "num_blocks": 65536, 00:15:39.217 "uuid": "9869c942-3987-456d-b65b-a89454b49005", 00:15:39.217 "assigned_rate_limits": { 00:15:39.217 "rw_ios_per_sec": 0, 00:15:39.217 "rw_mbytes_per_sec": 0, 00:15:39.217 "r_mbytes_per_sec": 0, 00:15:39.217 "w_mbytes_per_sec": 0 00:15:39.217 }, 00:15:39.217 "claimed": true, 00:15:39.217 "claim_type": "exclusive_write", 00:15:39.217 "zoned": false, 00:15:39.217 "supported_io_types": { 00:15:39.217 "read": true, 00:15:39.217 "write": true, 00:15:39.217 "unmap": true, 00:15:39.217 "flush": true, 00:15:39.217 "reset": true, 00:15:39.217 "nvme_admin": false, 00:15:39.217 "nvme_io": false, 00:15:39.217 "nvme_io_md": false, 00:15:39.217 "write_zeroes": true, 00:15:39.217 "zcopy": true, 00:15:39.217 "get_zone_info": false, 00:15:39.217 "zone_management": false, 00:15:39.217 "zone_append": false, 00:15:39.217 "compare": false, 00:15:39.217 "compare_and_write": false, 00:15:39.217 "abort": true, 00:15:39.217 "seek_hole": false, 00:15:39.217 "seek_data": false, 00:15:39.217 "copy": true, 00:15:39.217 "nvme_iov_md": false 00:15:39.217 }, 00:15:39.217 "memory_domains": [ 00:15:39.217 { 00:15:39.217 "dma_device_id": "system", 00:15:39.217 "dma_device_type": 1 00:15:39.217 }, 00:15:39.217 { 00:15:39.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.217 "dma_device_type": 2 00:15:39.217 } 00:15:39.217 ], 00:15:39.217 "driver_specific": {} 00:15:39.217 } 00:15:39.217 ] 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:39.217 
13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.217 "name": "Existed_Raid", 00:15:39.217 "uuid": "5eca4590-b351-41f4-92db-85a1637051b7", 00:15:39.217 "strip_size_kb": 0, 
00:15:39.217 "state": "configuring", 00:15:39.217 "raid_level": "raid1", 00:15:39.217 "superblock": true, 00:15:39.217 "num_base_bdevs": 3, 00:15:39.217 "num_base_bdevs_discovered": 2, 00:15:39.217 "num_base_bdevs_operational": 3, 00:15:39.217 "base_bdevs_list": [ 00:15:39.217 { 00:15:39.217 "name": "BaseBdev1", 00:15:39.217 "uuid": "9869c942-3987-456d-b65b-a89454b49005", 00:15:39.217 "is_configured": true, 00:15:39.217 "data_offset": 2048, 00:15:39.217 "data_size": 63488 00:15:39.217 }, 00:15:39.217 { 00:15:39.217 "name": null, 00:15:39.217 "uuid": "7aa70e1a-7fd8-4e89-a92c-b13ffc78b45a", 00:15:39.217 "is_configured": false, 00:15:39.217 "data_offset": 0, 00:15:39.217 "data_size": 63488 00:15:39.217 }, 00:15:39.217 { 00:15:39.217 "name": "BaseBdev3", 00:15:39.217 "uuid": "0da50a57-4915-4c98-9146-4fa674c3ea1e", 00:15:39.217 "is_configured": true, 00:15:39.217 "data_offset": 2048, 00:15:39.217 "data_size": 63488 00:15:39.217 } 00:15:39.217 ] 00:15:39.217 }' 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.217 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.798 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.798 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.798 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:39.798 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.798 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.798 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:39.798 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:15:39.798 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.798 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.798 [2024-12-06 13:10:26.778841] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:39.798 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.798 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:39.798 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.798 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.798 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:39.798 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:39.798 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:39.798 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.798 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.798 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.798 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.798 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.798 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.798 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:15:39.798 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.798 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.063 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.063 "name": "Existed_Raid", 00:15:40.063 "uuid": "5eca4590-b351-41f4-92db-85a1637051b7", 00:15:40.063 "strip_size_kb": 0, 00:15:40.063 "state": "configuring", 00:15:40.063 "raid_level": "raid1", 00:15:40.063 "superblock": true, 00:15:40.063 "num_base_bdevs": 3, 00:15:40.063 "num_base_bdevs_discovered": 1, 00:15:40.063 "num_base_bdevs_operational": 3, 00:15:40.063 "base_bdevs_list": [ 00:15:40.063 { 00:15:40.063 "name": "BaseBdev1", 00:15:40.063 "uuid": "9869c942-3987-456d-b65b-a89454b49005", 00:15:40.063 "is_configured": true, 00:15:40.063 "data_offset": 2048, 00:15:40.063 "data_size": 63488 00:15:40.063 }, 00:15:40.063 { 00:15:40.063 "name": null, 00:15:40.063 "uuid": "7aa70e1a-7fd8-4e89-a92c-b13ffc78b45a", 00:15:40.063 "is_configured": false, 00:15:40.063 "data_offset": 0, 00:15:40.063 "data_size": 63488 00:15:40.063 }, 00:15:40.063 { 00:15:40.063 "name": null, 00:15:40.063 "uuid": "0da50a57-4915-4c98-9146-4fa674c3ea1e", 00:15:40.063 "is_configured": false, 00:15:40.063 "data_offset": 0, 00:15:40.063 "data_size": 63488 00:15:40.063 } 00:15:40.063 ] 00:15:40.063 }' 00:15:40.063 13:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.063 13:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.373 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:40.373 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.373 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:40.373 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.373 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.373 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:40.373 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:40.373 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.373 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.373 [2024-12-06 13:10:27.331003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:40.373 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.373 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:40.373 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.373 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:40.373 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:40.373 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:40.373 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:40.373 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.373 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.373 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:15:40.373 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.373 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.373 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.373 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.373 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.373 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.632 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.632 "name": "Existed_Raid", 00:15:40.632 "uuid": "5eca4590-b351-41f4-92db-85a1637051b7", 00:15:40.632 "strip_size_kb": 0, 00:15:40.632 "state": "configuring", 00:15:40.632 "raid_level": "raid1", 00:15:40.632 "superblock": true, 00:15:40.632 "num_base_bdevs": 3, 00:15:40.632 "num_base_bdevs_discovered": 2, 00:15:40.632 "num_base_bdevs_operational": 3, 00:15:40.632 "base_bdevs_list": [ 00:15:40.632 { 00:15:40.632 "name": "BaseBdev1", 00:15:40.632 "uuid": "9869c942-3987-456d-b65b-a89454b49005", 00:15:40.632 "is_configured": true, 00:15:40.632 "data_offset": 2048, 00:15:40.632 "data_size": 63488 00:15:40.632 }, 00:15:40.632 { 00:15:40.632 "name": null, 00:15:40.632 "uuid": "7aa70e1a-7fd8-4e89-a92c-b13ffc78b45a", 00:15:40.632 "is_configured": false, 00:15:40.632 "data_offset": 0, 00:15:40.632 "data_size": 63488 00:15:40.632 }, 00:15:40.632 { 00:15:40.632 "name": "BaseBdev3", 00:15:40.632 "uuid": "0da50a57-4915-4c98-9146-4fa674c3ea1e", 00:15:40.632 "is_configured": true, 00:15:40.632 "data_offset": 2048, 00:15:40.632 "data_size": 63488 00:15:40.632 } 00:15:40.632 ] 00:15:40.632 }' 00:15:40.632 13:10:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.632 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.891 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:40.891 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.891 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.891 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.891 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.891 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:40.891 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:40.891 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.891 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.150 [2024-12-06 13:10:27.907198] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:41.150 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.150 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:41.150 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.150 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:41.150 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:41.151 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:15:41.151 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:41.151 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.151 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.151 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.151 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.151 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.151 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.151 13:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.151 13:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.151 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.151 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.151 "name": "Existed_Raid", 00:15:41.151 "uuid": "5eca4590-b351-41f4-92db-85a1637051b7", 00:15:41.151 "strip_size_kb": 0, 00:15:41.151 "state": "configuring", 00:15:41.151 "raid_level": "raid1", 00:15:41.151 "superblock": true, 00:15:41.151 "num_base_bdevs": 3, 00:15:41.151 "num_base_bdevs_discovered": 1, 00:15:41.151 "num_base_bdevs_operational": 3, 00:15:41.151 "base_bdevs_list": [ 00:15:41.151 { 00:15:41.151 "name": null, 00:15:41.151 "uuid": "9869c942-3987-456d-b65b-a89454b49005", 00:15:41.151 "is_configured": false, 00:15:41.151 "data_offset": 0, 00:15:41.151 "data_size": 63488 00:15:41.151 }, 00:15:41.151 { 00:15:41.151 "name": null, 00:15:41.151 "uuid": 
"7aa70e1a-7fd8-4e89-a92c-b13ffc78b45a", 00:15:41.151 "is_configured": false, 00:15:41.151 "data_offset": 0, 00:15:41.151 "data_size": 63488 00:15:41.151 }, 00:15:41.151 { 00:15:41.151 "name": "BaseBdev3", 00:15:41.151 "uuid": "0da50a57-4915-4c98-9146-4fa674c3ea1e", 00:15:41.151 "is_configured": true, 00:15:41.151 "data_offset": 2048, 00:15:41.151 "data_size": 63488 00:15:41.151 } 00:15:41.151 ] 00:15:41.151 }' 00:15:41.151 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.151 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.718 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.718 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.718 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.718 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:41.718 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.718 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:41.718 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:41.718 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.718 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.718 [2024-12-06 13:10:28.594210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:41.718 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.718 13:10:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:41.718 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.718 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:41.718 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:41.718 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:41.719 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:41.719 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.719 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.719 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.719 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.719 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.719 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.719 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.719 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.719 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.719 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.719 "name": "Existed_Raid", 00:15:41.719 "uuid": "5eca4590-b351-41f4-92db-85a1637051b7", 00:15:41.719 "strip_size_kb": 0, 00:15:41.719 "state": "configuring", 00:15:41.719 
"raid_level": "raid1", 00:15:41.719 "superblock": true, 00:15:41.719 "num_base_bdevs": 3, 00:15:41.719 "num_base_bdevs_discovered": 2, 00:15:41.719 "num_base_bdevs_operational": 3, 00:15:41.719 "base_bdevs_list": [ 00:15:41.719 { 00:15:41.719 "name": null, 00:15:41.719 "uuid": "9869c942-3987-456d-b65b-a89454b49005", 00:15:41.719 "is_configured": false, 00:15:41.719 "data_offset": 0, 00:15:41.719 "data_size": 63488 00:15:41.719 }, 00:15:41.719 { 00:15:41.719 "name": "BaseBdev2", 00:15:41.719 "uuid": "7aa70e1a-7fd8-4e89-a92c-b13ffc78b45a", 00:15:41.719 "is_configured": true, 00:15:41.719 "data_offset": 2048, 00:15:41.719 "data_size": 63488 00:15:41.719 }, 00:15:41.719 { 00:15:41.719 "name": "BaseBdev3", 00:15:41.719 "uuid": "0da50a57-4915-4c98-9146-4fa674c3ea1e", 00:15:41.719 "is_configured": true, 00:15:41.719 "data_offset": 2048, 00:15:41.719 "data_size": 63488 00:15:41.719 } 00:15:41.719 ] 00:15:41.719 }' 00:15:41.719 13:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.719 13:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.286 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:42.286 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.286 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.286 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.286 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.286 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:42.286 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.286 13:10:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:42.286 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.286 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.286 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.286 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9869c942-3987-456d-b65b-a89454b49005 00:15:42.286 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.286 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.286 [2024-12-06 13:10:29.256050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:42.286 [2024-12-06 13:10:29.256333] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:42.286 [2024-12-06 13:10:29.256351] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:42.286 [2024-12-06 13:10:29.256750] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:42.286 [2024-12-06 13:10:29.256935] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:42.286 [2024-12-06 13:10:29.256960] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:42.286 [2024-12-06 13:10:29.257118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.286 NewBaseBdev 00:15:42.286 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.286 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:42.286 
13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:42.286 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:42.286 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:42.286 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:42.286 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:42.286 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:42.286 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.286 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.286 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.286 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:42.286 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.286 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.286 [ 00:15:42.286 { 00:15:42.286 "name": "NewBaseBdev", 00:15:42.286 "aliases": [ 00:15:42.286 "9869c942-3987-456d-b65b-a89454b49005" 00:15:42.286 ], 00:15:42.286 "product_name": "Malloc disk", 00:15:42.286 "block_size": 512, 00:15:42.286 "num_blocks": 65536, 00:15:42.286 "uuid": "9869c942-3987-456d-b65b-a89454b49005", 00:15:42.286 "assigned_rate_limits": { 00:15:42.286 "rw_ios_per_sec": 0, 00:15:42.286 "rw_mbytes_per_sec": 0, 00:15:42.286 "r_mbytes_per_sec": 0, 00:15:42.286 "w_mbytes_per_sec": 0 00:15:42.286 }, 00:15:42.286 "claimed": true, 00:15:42.286 "claim_type": "exclusive_write", 00:15:42.286 
"zoned": false, 00:15:42.286 "supported_io_types": { 00:15:42.286 "read": true, 00:15:42.286 "write": true, 00:15:42.286 "unmap": true, 00:15:42.286 "flush": true, 00:15:42.286 "reset": true, 00:15:42.286 "nvme_admin": false, 00:15:42.286 "nvme_io": false, 00:15:42.286 "nvme_io_md": false, 00:15:42.286 "write_zeroes": true, 00:15:42.286 "zcopy": true, 00:15:42.286 "get_zone_info": false, 00:15:42.286 "zone_management": false, 00:15:42.286 "zone_append": false, 00:15:42.286 "compare": false, 00:15:42.286 "compare_and_write": false, 00:15:42.286 "abort": true, 00:15:42.286 "seek_hole": false, 00:15:42.286 "seek_data": false, 00:15:42.286 "copy": true, 00:15:42.286 "nvme_iov_md": false 00:15:42.286 }, 00:15:42.286 "memory_domains": [ 00:15:42.286 { 00:15:42.286 "dma_device_id": "system", 00:15:42.286 "dma_device_type": 1 00:15:42.286 }, 00:15:42.286 { 00:15:42.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.286 "dma_device_type": 2 00:15:42.286 } 00:15:42.286 ], 00:15:42.286 "driver_specific": {} 00:15:42.286 } 00:15:42.286 ] 00:15:42.286 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.286 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:42.286 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:42.286 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.286 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.286 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:42.286 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:42.286 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:15:42.286 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.286 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.286 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.287 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.287 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.287 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.287 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.287 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.545 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.545 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.545 "name": "Existed_Raid", 00:15:42.545 "uuid": "5eca4590-b351-41f4-92db-85a1637051b7", 00:15:42.545 "strip_size_kb": 0, 00:15:42.545 "state": "online", 00:15:42.545 "raid_level": "raid1", 00:15:42.545 "superblock": true, 00:15:42.545 "num_base_bdevs": 3, 00:15:42.545 "num_base_bdevs_discovered": 3, 00:15:42.545 "num_base_bdevs_operational": 3, 00:15:42.545 "base_bdevs_list": [ 00:15:42.545 { 00:15:42.545 "name": "NewBaseBdev", 00:15:42.545 "uuid": "9869c942-3987-456d-b65b-a89454b49005", 00:15:42.545 "is_configured": true, 00:15:42.545 "data_offset": 2048, 00:15:42.545 "data_size": 63488 00:15:42.545 }, 00:15:42.545 { 00:15:42.545 "name": "BaseBdev2", 00:15:42.545 "uuid": "7aa70e1a-7fd8-4e89-a92c-b13ffc78b45a", 00:15:42.545 "is_configured": true, 00:15:42.545 "data_offset": 2048, 00:15:42.545 "data_size": 63488 00:15:42.545 }, 00:15:42.545 
{ 00:15:42.545 "name": "BaseBdev3", 00:15:42.545 "uuid": "0da50a57-4915-4c98-9146-4fa674c3ea1e", 00:15:42.545 "is_configured": true, 00:15:42.545 "data_offset": 2048, 00:15:42.545 "data_size": 63488 00:15:42.545 } 00:15:42.545 ] 00:15:42.545 }' 00:15:42.545 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.545 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.112 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:43.112 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:43.112 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:43.112 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:43.112 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:43.112 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:43.112 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:43.112 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.112 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:43.112 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.112 [2024-12-06 13:10:29.832644] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:43.112 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.112 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:43.112 "name": "Existed_Raid", 00:15:43.112 
"aliases": [ 00:15:43.112 "5eca4590-b351-41f4-92db-85a1637051b7" 00:15:43.112 ], 00:15:43.112 "product_name": "Raid Volume", 00:15:43.112 "block_size": 512, 00:15:43.112 "num_blocks": 63488, 00:15:43.112 "uuid": "5eca4590-b351-41f4-92db-85a1637051b7", 00:15:43.112 "assigned_rate_limits": { 00:15:43.112 "rw_ios_per_sec": 0, 00:15:43.112 "rw_mbytes_per_sec": 0, 00:15:43.112 "r_mbytes_per_sec": 0, 00:15:43.112 "w_mbytes_per_sec": 0 00:15:43.112 }, 00:15:43.112 "claimed": false, 00:15:43.112 "zoned": false, 00:15:43.112 "supported_io_types": { 00:15:43.112 "read": true, 00:15:43.112 "write": true, 00:15:43.112 "unmap": false, 00:15:43.112 "flush": false, 00:15:43.112 "reset": true, 00:15:43.112 "nvme_admin": false, 00:15:43.112 "nvme_io": false, 00:15:43.112 "nvme_io_md": false, 00:15:43.112 "write_zeroes": true, 00:15:43.112 "zcopy": false, 00:15:43.112 "get_zone_info": false, 00:15:43.112 "zone_management": false, 00:15:43.112 "zone_append": false, 00:15:43.112 "compare": false, 00:15:43.112 "compare_and_write": false, 00:15:43.112 "abort": false, 00:15:43.112 "seek_hole": false, 00:15:43.112 "seek_data": false, 00:15:43.112 "copy": false, 00:15:43.112 "nvme_iov_md": false 00:15:43.112 }, 00:15:43.112 "memory_domains": [ 00:15:43.112 { 00:15:43.112 "dma_device_id": "system", 00:15:43.112 "dma_device_type": 1 00:15:43.112 }, 00:15:43.112 { 00:15:43.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.112 "dma_device_type": 2 00:15:43.112 }, 00:15:43.112 { 00:15:43.112 "dma_device_id": "system", 00:15:43.112 "dma_device_type": 1 00:15:43.112 }, 00:15:43.112 { 00:15:43.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.112 "dma_device_type": 2 00:15:43.112 }, 00:15:43.112 { 00:15:43.112 "dma_device_id": "system", 00:15:43.112 "dma_device_type": 1 00:15:43.112 }, 00:15:43.112 { 00:15:43.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.112 "dma_device_type": 2 00:15:43.112 } 00:15:43.112 ], 00:15:43.112 "driver_specific": { 00:15:43.112 "raid": { 00:15:43.112 
"uuid": "5eca4590-b351-41f4-92db-85a1637051b7", 00:15:43.112 "strip_size_kb": 0, 00:15:43.112 "state": "online", 00:15:43.112 "raid_level": "raid1", 00:15:43.112 "superblock": true, 00:15:43.112 "num_base_bdevs": 3, 00:15:43.112 "num_base_bdevs_discovered": 3, 00:15:43.112 "num_base_bdevs_operational": 3, 00:15:43.112 "base_bdevs_list": [ 00:15:43.112 { 00:15:43.112 "name": "NewBaseBdev", 00:15:43.112 "uuid": "9869c942-3987-456d-b65b-a89454b49005", 00:15:43.112 "is_configured": true, 00:15:43.112 "data_offset": 2048, 00:15:43.112 "data_size": 63488 00:15:43.112 }, 00:15:43.112 { 00:15:43.112 "name": "BaseBdev2", 00:15:43.112 "uuid": "7aa70e1a-7fd8-4e89-a92c-b13ffc78b45a", 00:15:43.112 "is_configured": true, 00:15:43.112 "data_offset": 2048, 00:15:43.112 "data_size": 63488 00:15:43.112 }, 00:15:43.112 { 00:15:43.112 "name": "BaseBdev3", 00:15:43.112 "uuid": "0da50a57-4915-4c98-9146-4fa674c3ea1e", 00:15:43.112 "is_configured": true, 00:15:43.112 "data_offset": 2048, 00:15:43.112 "data_size": 63488 00:15:43.112 } 00:15:43.112 ] 00:15:43.112 } 00:15:43.112 } 00:15:43.112 }' 00:15:43.112 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:43.112 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:43.112 BaseBdev2 00:15:43.112 BaseBdev3' 00:15:43.113 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:43.113 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:43.113 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:43.113 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:43.113 13:10:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.113 13:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.113 13:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:43.113 13:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.113 13:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:43.113 13:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:43.113 13:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:43.113 13:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:43.113 13:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:43.113 13:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.113 13:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.113 13:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.113 13:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:43.113 13:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:43.113 13:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:43.113 13:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:43.113 13:10:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.113 13:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.113 13:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:43.113 13:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.371 13:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:43.371 13:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:43.371 13:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:43.371 13:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.372 13:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.372 [2024-12-06 13:10:30.148322] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:43.372 [2024-12-06 13:10:30.148367] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:43.372 [2024-12-06 13:10:30.148450] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:43.372 [2024-12-06 13:10:30.148819] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:43.372 [2024-12-06 13:10:30.148837] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:43.372 13:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.372 13:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68327 00:15:43.372 13:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 68327 ']' 00:15:43.372 13:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68327 00:15:43.372 13:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:43.372 13:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:43.372 13:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68327 00:15:43.372 killing process with pid 68327 00:15:43.372 13:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:43.372 13:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:43.372 13:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68327' 00:15:43.372 13:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68327 00:15:43.372 [2024-12-06 13:10:30.184487] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:43.372 13:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68327 00:15:43.630 [2024-12-06 13:10:30.448284] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:44.570 13:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:44.570 00:15:44.570 real 0m11.911s 00:15:44.570 user 0m19.749s 00:15:44.570 sys 0m1.660s 00:15:44.570 ************************************ 00:15:44.570 END TEST raid_state_function_test_sb 00:15:44.570 ************************************ 00:15:44.570 13:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:44.570 13:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.570 13:10:31 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:15:44.570 13:10:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:44.570 13:10:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:44.570 13:10:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:44.570 ************************************ 00:15:44.570 START TEST raid_superblock_test 00:15:44.570 ************************************ 00:15:44.570 13:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:15:44.570 13:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:44.570 13:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:44.570 13:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:44.570 13:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:44.570 13:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:44.570 13:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:44.570 13:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:44.570 13:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:44.570 13:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:44.570 13:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:44.570 13:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:44.570 13:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:44.570 13:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:44.570 13:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:15:44.570 13:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:44.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.570 13:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68964 00:15:44.570 13:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68964 00:15:44.570 13:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:44.570 13:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68964 ']' 00:15:44.570 13:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.570 13:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:44.570 13:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.570 13:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:44.570 13:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.829 [2024-12-06 13:10:31.649083] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:15:44.829 [2024-12-06 13:10:31.649244] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68964 ] 00:15:44.829 [2024-12-06 13:10:31.824727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.088 [2024-12-06 13:10:31.953619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.347 [2024-12-06 13:10:32.157542] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:45.347 [2024-12-06 13:10:32.157615] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:45.916 13:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:45.916 13:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:45.916 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:45.916 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:45.916 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:45.916 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:45.916 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:45.916 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:45.916 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:45.916 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:45.916 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:45.916 
13:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.916 13:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.916 malloc1 00:15:45.916 13:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.916 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:45.916 13:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.916 13:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.916 [2024-12-06 13:10:32.694067] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:45.916 [2024-12-06 13:10:32.694367] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.916 [2024-12-06 13:10:32.694444] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:45.916 [2024-12-06 13:10:32.694597] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.916 [2024-12-06 13:10:32.697426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.916 [2024-12-06 13:10:32.697611] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:45.916 pt1 00:15:45.916 13:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.916 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:45.916 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:45.916 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:45.916 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:45.916 13:10:32 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:45.916 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:45.916 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:45.916 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:45.916 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:45.916 13:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.916 13:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.916 malloc2 00:15:45.916 13:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.916 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:45.916 13:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.916 13:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.916 [2024-12-06 13:10:32.750085] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:45.916 [2024-12-06 13:10:32.750313] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.916 [2024-12-06 13:10:32.750357] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:45.916 [2024-12-06 13:10:32.750375] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.916 [2024-12-06 13:10:32.753183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.917 [2024-12-06 13:10:32.753231] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:45.917 
pt2 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.917 malloc3 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.917 [2024-12-06 13:10:32.822593] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:45.917 [2024-12-06 13:10:32.822676] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.917 [2024-12-06 13:10:32.822711] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:45.917 [2024-12-06 13:10:32.822728] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.917 [2024-12-06 13:10:32.825422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.917 [2024-12-06 13:10:32.825631] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:45.917 pt3 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.917 [2024-12-06 13:10:32.834686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:45.917 [2024-12-06 13:10:32.837102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:45.917 [2024-12-06 13:10:32.837348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:45.917 [2024-12-06 13:10:32.837592] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:45.917 [2024-12-06 13:10:32.837621] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:45.917 [2024-12-06 13:10:32.837925] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:45.917 
[2024-12-06 13:10:32.838147] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:45.917 [2024-12-06 13:10:32.838166] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:45.917 [2024-12-06 13:10:32.838345] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.917 "name": "raid_bdev1", 00:15:45.917 "uuid": "9ede3839-1e42-46d8-8b2d-cdfb73d3127b", 00:15:45.917 "strip_size_kb": 0, 00:15:45.917 "state": "online", 00:15:45.917 "raid_level": "raid1", 00:15:45.917 "superblock": true, 00:15:45.917 "num_base_bdevs": 3, 00:15:45.917 "num_base_bdevs_discovered": 3, 00:15:45.917 "num_base_bdevs_operational": 3, 00:15:45.917 "base_bdevs_list": [ 00:15:45.917 { 00:15:45.917 "name": "pt1", 00:15:45.917 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:45.917 "is_configured": true, 00:15:45.917 "data_offset": 2048, 00:15:45.917 "data_size": 63488 00:15:45.917 }, 00:15:45.917 { 00:15:45.917 "name": "pt2", 00:15:45.917 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:45.917 "is_configured": true, 00:15:45.917 "data_offset": 2048, 00:15:45.917 "data_size": 63488 00:15:45.917 }, 00:15:45.917 { 00:15:45.917 "name": "pt3", 00:15:45.917 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:45.917 "is_configured": true, 00:15:45.917 "data_offset": 2048, 00:15:45.917 "data_size": 63488 00:15:45.917 } 00:15:45.917 ] 00:15:45.917 }' 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.917 13:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.485 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:46.485 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:46.485 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:46.485 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:46.485 13:10:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:46.485 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:46.485 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:46.485 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.485 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:46.485 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.485 [2024-12-06 13:10:33.363321] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:46.485 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.485 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:46.485 "name": "raid_bdev1", 00:15:46.485 "aliases": [ 00:15:46.485 "9ede3839-1e42-46d8-8b2d-cdfb73d3127b" 00:15:46.485 ], 00:15:46.485 "product_name": "Raid Volume", 00:15:46.485 "block_size": 512, 00:15:46.485 "num_blocks": 63488, 00:15:46.485 "uuid": "9ede3839-1e42-46d8-8b2d-cdfb73d3127b", 00:15:46.485 "assigned_rate_limits": { 00:15:46.485 "rw_ios_per_sec": 0, 00:15:46.485 "rw_mbytes_per_sec": 0, 00:15:46.485 "r_mbytes_per_sec": 0, 00:15:46.485 "w_mbytes_per_sec": 0 00:15:46.485 }, 00:15:46.485 "claimed": false, 00:15:46.485 "zoned": false, 00:15:46.485 "supported_io_types": { 00:15:46.485 "read": true, 00:15:46.485 "write": true, 00:15:46.485 "unmap": false, 00:15:46.485 "flush": false, 00:15:46.485 "reset": true, 00:15:46.485 "nvme_admin": false, 00:15:46.485 "nvme_io": false, 00:15:46.485 "nvme_io_md": false, 00:15:46.485 "write_zeroes": true, 00:15:46.485 "zcopy": false, 00:15:46.485 "get_zone_info": false, 00:15:46.485 "zone_management": false, 00:15:46.485 "zone_append": false, 00:15:46.485 "compare": false, 00:15:46.485 
"compare_and_write": false, 00:15:46.485 "abort": false, 00:15:46.485 "seek_hole": false, 00:15:46.485 "seek_data": false, 00:15:46.485 "copy": false, 00:15:46.485 "nvme_iov_md": false 00:15:46.485 }, 00:15:46.485 "memory_domains": [ 00:15:46.485 { 00:15:46.485 "dma_device_id": "system", 00:15:46.485 "dma_device_type": 1 00:15:46.485 }, 00:15:46.485 { 00:15:46.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.485 "dma_device_type": 2 00:15:46.485 }, 00:15:46.485 { 00:15:46.485 "dma_device_id": "system", 00:15:46.485 "dma_device_type": 1 00:15:46.485 }, 00:15:46.485 { 00:15:46.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.485 "dma_device_type": 2 00:15:46.485 }, 00:15:46.485 { 00:15:46.485 "dma_device_id": "system", 00:15:46.485 "dma_device_type": 1 00:15:46.485 }, 00:15:46.485 { 00:15:46.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.485 "dma_device_type": 2 00:15:46.485 } 00:15:46.485 ], 00:15:46.485 "driver_specific": { 00:15:46.485 "raid": { 00:15:46.485 "uuid": "9ede3839-1e42-46d8-8b2d-cdfb73d3127b", 00:15:46.485 "strip_size_kb": 0, 00:15:46.485 "state": "online", 00:15:46.485 "raid_level": "raid1", 00:15:46.485 "superblock": true, 00:15:46.485 "num_base_bdevs": 3, 00:15:46.485 "num_base_bdevs_discovered": 3, 00:15:46.485 "num_base_bdevs_operational": 3, 00:15:46.485 "base_bdevs_list": [ 00:15:46.485 { 00:15:46.485 "name": "pt1", 00:15:46.485 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:46.485 "is_configured": true, 00:15:46.485 "data_offset": 2048, 00:15:46.485 "data_size": 63488 00:15:46.485 }, 00:15:46.485 { 00:15:46.485 "name": "pt2", 00:15:46.485 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:46.485 "is_configured": true, 00:15:46.485 "data_offset": 2048, 00:15:46.485 "data_size": 63488 00:15:46.485 }, 00:15:46.485 { 00:15:46.485 "name": "pt3", 00:15:46.485 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:46.485 "is_configured": true, 00:15:46.485 "data_offset": 2048, 00:15:46.485 "data_size": 63488 00:15:46.485 } 
00:15:46.485 ] 00:15:46.485 } 00:15:46.485 } 00:15:46.485 }' 00:15:46.485 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:46.485 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:46.485 pt2 00:15:46.485 pt3' 00:15:46.485 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.485 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.744 [2024-12-06 13:10:33.683297] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9ede3839-1e42-46d8-8b2d-cdfb73d3127b 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9ede3839-1e42-46d8-8b2d-cdfb73d3127b ']' 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.744 [2024-12-06 13:10:33.735013] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:46.744 [2024-12-06 13:10:33.735197] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:46.744 [2024-12-06 13:10:33.735309] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:46.744 [2024-12-06 13:10:33.735403] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:46.744 [2024-12-06 13:10:33.735419] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.744 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.003 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:15:47.003 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:47.003 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:47.003 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:47.003 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.003 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.003 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.003 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:47.003 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:47.003 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.003 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.003 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.003 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:47.003 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:47.003 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.003 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.003 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.003 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:47.003 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.003 13:10:33 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:47.003 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:47.003 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.004 [2024-12-06 13:10:33.883097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:47.004 [2024-12-06 13:10:33.885732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:47.004 [2024-12-06 13:10:33.885807] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:47.004 [2024-12-06 13:10:33.885877] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:47.004 [2024-12-06 13:10:33.885970] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:47.004 [2024-12-06 13:10:33.886003] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:47.004 [2024-12-06 13:10:33.886029] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:47.004 [2024-12-06 13:10:33.886042] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:47.004 request: 00:15:47.004 { 00:15:47.004 "name": "raid_bdev1", 00:15:47.004 "raid_level": "raid1", 00:15:47.004 "base_bdevs": [ 00:15:47.004 "malloc1", 00:15:47.004 "malloc2", 00:15:47.004 "malloc3" 00:15:47.004 ], 00:15:47.004 "superblock": false, 00:15:47.004 "method": "bdev_raid_create", 00:15:47.004 "req_id": 1 00:15:47.004 } 00:15:47.004 Got JSON-RPC error response 00:15:47.004 response: 00:15:47.004 { 00:15:47.004 "code": -17, 00:15:47.004 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:47.004 } 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.004 [2024-12-06 13:10:33.951063] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:47.004 [2024-12-06 13:10:33.951254] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.004 [2024-12-06 13:10:33.951331] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:47.004 [2024-12-06 13:10:33.951447] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.004 [2024-12-06 13:10:33.954376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.004 [2024-12-06 13:10:33.954556] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:47.004 [2024-12-06 13:10:33.954813] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:47.004 [2024-12-06 13:10:33.955000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:47.004 pt1 00:15:47.004 
13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.004 13:10:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.004 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.004 "name": "raid_bdev1", 00:15:47.004 "uuid": "9ede3839-1e42-46d8-8b2d-cdfb73d3127b", 00:15:47.004 "strip_size_kb": 0, 00:15:47.004 
"state": "configuring", 00:15:47.004 "raid_level": "raid1", 00:15:47.004 "superblock": true, 00:15:47.004 "num_base_bdevs": 3, 00:15:47.004 "num_base_bdevs_discovered": 1, 00:15:47.004 "num_base_bdevs_operational": 3, 00:15:47.004 "base_bdevs_list": [ 00:15:47.004 { 00:15:47.004 "name": "pt1", 00:15:47.004 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:47.004 "is_configured": true, 00:15:47.004 "data_offset": 2048, 00:15:47.004 "data_size": 63488 00:15:47.004 }, 00:15:47.004 { 00:15:47.004 "name": null, 00:15:47.004 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:47.004 "is_configured": false, 00:15:47.004 "data_offset": 2048, 00:15:47.004 "data_size": 63488 00:15:47.004 }, 00:15:47.004 { 00:15:47.004 "name": null, 00:15:47.004 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:47.004 "is_configured": false, 00:15:47.004 "data_offset": 2048, 00:15:47.004 "data_size": 63488 00:15:47.004 } 00:15:47.004 ] 00:15:47.004 }' 00:15:47.004 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.004 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.572 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:47.572 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:47.572 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.572 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.572 [2024-12-06 13:10:34.439518] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:47.572 [2024-12-06 13:10:34.439611] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.572 [2024-12-06 13:10:34.439648] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:47.572 
[2024-12-06 13:10:34.439666] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.572 [2024-12-06 13:10:34.440226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.572 [2024-12-06 13:10:34.440250] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:47.572 [2024-12-06 13:10:34.440354] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:47.572 [2024-12-06 13:10:34.440386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:47.572 pt2 00:15:47.572 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.572 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:47.572 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.572 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.572 [2024-12-06 13:10:34.447495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:47.572 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.572 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:47.572 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:47.572 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:47.572 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:47.572 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:47.572 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:47.572 13:10:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.572 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.572 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.572 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.572 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.572 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.572 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.572 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.572 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.572 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.572 "name": "raid_bdev1", 00:15:47.572 "uuid": "9ede3839-1e42-46d8-8b2d-cdfb73d3127b", 00:15:47.572 "strip_size_kb": 0, 00:15:47.572 "state": "configuring", 00:15:47.572 "raid_level": "raid1", 00:15:47.572 "superblock": true, 00:15:47.572 "num_base_bdevs": 3, 00:15:47.572 "num_base_bdevs_discovered": 1, 00:15:47.572 "num_base_bdevs_operational": 3, 00:15:47.572 "base_bdevs_list": [ 00:15:47.572 { 00:15:47.572 "name": "pt1", 00:15:47.572 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:47.572 "is_configured": true, 00:15:47.572 "data_offset": 2048, 00:15:47.572 "data_size": 63488 00:15:47.572 }, 00:15:47.572 { 00:15:47.572 "name": null, 00:15:47.572 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:47.572 "is_configured": false, 00:15:47.572 "data_offset": 0, 00:15:47.572 "data_size": 63488 00:15:47.572 }, 00:15:47.572 { 00:15:47.572 "name": null, 00:15:47.572 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:47.572 "is_configured": false, 00:15:47.572 
"data_offset": 2048, 00:15:47.572 "data_size": 63488 00:15:47.572 } 00:15:47.572 ] 00:15:47.572 }' 00:15:47.572 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.572 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.139 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:48.139 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:48.139 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:48.139 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.139 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.139 [2024-12-06 13:10:34.967642] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:48.139 [2024-12-06 13:10:34.967738] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.139 [2024-12-06 13:10:34.967770] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:48.139 [2024-12-06 13:10:34.967808] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.139 [2024-12-06 13:10:34.968432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.139 [2024-12-06 13:10:34.968488] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:48.139 [2024-12-06 13:10:34.968608] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:48.139 [2024-12-06 13:10:34.968658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:48.139 pt2 00:15:48.139 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.139 13:10:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:48.139 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:48.139 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:48.139 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.139 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.139 [2024-12-06 13:10:34.975607] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:48.139 [2024-12-06 13:10:34.975664] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.139 [2024-12-06 13:10:34.975688] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:48.139 [2024-12-06 13:10:34.975706] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.139 [2024-12-06 13:10:34.976145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.139 [2024-12-06 13:10:34.976210] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:48.139 [2024-12-06 13:10:34.976304] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:48.139 [2024-12-06 13:10:34.976337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:48.139 [2024-12-06 13:10:34.976516] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:48.139 [2024-12-06 13:10:34.976547] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:48.139 [2024-12-06 13:10:34.976852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:48.139 [2024-12-06 13:10:34.977048] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:15:48.139 [2024-12-06 13:10:34.977070] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:48.139 [2024-12-06 13:10:34.977241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.139 pt3 00:15:48.139 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.139 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:48.139 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:48.139 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:48.139 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.139 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.139 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:48.139 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:48.139 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:48.139 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.139 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.139 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.139 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.139 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.139 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.139 13:10:34 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:15:48.139 13:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.139 13:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.139 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.139 "name": "raid_bdev1", 00:15:48.139 "uuid": "9ede3839-1e42-46d8-8b2d-cdfb73d3127b", 00:15:48.139 "strip_size_kb": 0, 00:15:48.139 "state": "online", 00:15:48.139 "raid_level": "raid1", 00:15:48.139 "superblock": true, 00:15:48.139 "num_base_bdevs": 3, 00:15:48.139 "num_base_bdevs_discovered": 3, 00:15:48.139 "num_base_bdevs_operational": 3, 00:15:48.139 "base_bdevs_list": [ 00:15:48.139 { 00:15:48.139 "name": "pt1", 00:15:48.139 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:48.139 "is_configured": true, 00:15:48.139 "data_offset": 2048, 00:15:48.139 "data_size": 63488 00:15:48.139 }, 00:15:48.139 { 00:15:48.139 "name": "pt2", 00:15:48.139 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:48.139 "is_configured": true, 00:15:48.139 "data_offset": 2048, 00:15:48.139 "data_size": 63488 00:15:48.139 }, 00:15:48.139 { 00:15:48.139 "name": "pt3", 00:15:48.139 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:48.139 "is_configured": true, 00:15:48.139 "data_offset": 2048, 00:15:48.139 "data_size": 63488 00:15:48.139 } 00:15:48.139 ] 00:15:48.139 }' 00:15:48.139 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.139 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.703 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:48.703 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:48.703 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:15:48.703 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:48.703 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:48.703 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:48.703 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:48.703 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.703 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.703 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:48.703 [2024-12-06 13:10:35.476155] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:48.703 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.703 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:48.703 "name": "raid_bdev1", 00:15:48.703 "aliases": [ 00:15:48.703 "9ede3839-1e42-46d8-8b2d-cdfb73d3127b" 00:15:48.703 ], 00:15:48.703 "product_name": "Raid Volume", 00:15:48.703 "block_size": 512, 00:15:48.703 "num_blocks": 63488, 00:15:48.703 "uuid": "9ede3839-1e42-46d8-8b2d-cdfb73d3127b", 00:15:48.703 "assigned_rate_limits": { 00:15:48.703 "rw_ios_per_sec": 0, 00:15:48.703 "rw_mbytes_per_sec": 0, 00:15:48.703 "r_mbytes_per_sec": 0, 00:15:48.703 "w_mbytes_per_sec": 0 00:15:48.703 }, 00:15:48.703 "claimed": false, 00:15:48.703 "zoned": false, 00:15:48.703 "supported_io_types": { 00:15:48.703 "read": true, 00:15:48.703 "write": true, 00:15:48.703 "unmap": false, 00:15:48.703 "flush": false, 00:15:48.703 "reset": true, 00:15:48.703 "nvme_admin": false, 00:15:48.703 "nvme_io": false, 00:15:48.703 "nvme_io_md": false, 00:15:48.703 "write_zeroes": true, 00:15:48.703 "zcopy": false, 00:15:48.703 "get_zone_info": false, 
00:15:48.703 "zone_management": false, 00:15:48.703 "zone_append": false, 00:15:48.703 "compare": false, 00:15:48.703 "compare_and_write": false, 00:15:48.703 "abort": false, 00:15:48.703 "seek_hole": false, 00:15:48.703 "seek_data": false, 00:15:48.703 "copy": false, 00:15:48.703 "nvme_iov_md": false 00:15:48.703 }, 00:15:48.703 "memory_domains": [ 00:15:48.703 { 00:15:48.703 "dma_device_id": "system", 00:15:48.703 "dma_device_type": 1 00:15:48.703 }, 00:15:48.703 { 00:15:48.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.703 "dma_device_type": 2 00:15:48.703 }, 00:15:48.703 { 00:15:48.703 "dma_device_id": "system", 00:15:48.703 "dma_device_type": 1 00:15:48.703 }, 00:15:48.703 { 00:15:48.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.703 "dma_device_type": 2 00:15:48.703 }, 00:15:48.703 { 00:15:48.703 "dma_device_id": "system", 00:15:48.703 "dma_device_type": 1 00:15:48.703 }, 00:15:48.703 { 00:15:48.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.704 "dma_device_type": 2 00:15:48.704 } 00:15:48.704 ], 00:15:48.704 "driver_specific": { 00:15:48.704 "raid": { 00:15:48.704 "uuid": "9ede3839-1e42-46d8-8b2d-cdfb73d3127b", 00:15:48.704 "strip_size_kb": 0, 00:15:48.704 "state": "online", 00:15:48.704 "raid_level": "raid1", 00:15:48.704 "superblock": true, 00:15:48.704 "num_base_bdevs": 3, 00:15:48.704 "num_base_bdevs_discovered": 3, 00:15:48.704 "num_base_bdevs_operational": 3, 00:15:48.704 "base_bdevs_list": [ 00:15:48.704 { 00:15:48.704 "name": "pt1", 00:15:48.704 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:48.704 "is_configured": true, 00:15:48.704 "data_offset": 2048, 00:15:48.704 "data_size": 63488 00:15:48.704 }, 00:15:48.704 { 00:15:48.704 "name": "pt2", 00:15:48.704 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:48.704 "is_configured": true, 00:15:48.704 "data_offset": 2048, 00:15:48.704 "data_size": 63488 00:15:48.704 }, 00:15:48.704 { 00:15:48.704 "name": "pt3", 00:15:48.704 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:15:48.704 "is_configured": true, 00:15:48.704 "data_offset": 2048, 00:15:48.704 "data_size": 63488 00:15:48.704 } 00:15:48.704 ] 00:15:48.704 } 00:15:48.704 } 00:15:48.704 }' 00:15:48.704 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:48.704 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:48.704 pt2 00:15:48.704 pt3' 00:15:48.704 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.704 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:48.704 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.704 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.704 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:48.704 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.704 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.704 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.704 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:48.704 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:48.704 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.704 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:48.704 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq 
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.704 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.704 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.704 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.704 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:48.704 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:48.704 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.704 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:48.704 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.704 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.704 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.961 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.961 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:48.961 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:48.961 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:48.961 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:48.961 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.961 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.961 [2024-12-06 13:10:35.768206] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:48.961 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.961 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9ede3839-1e42-46d8-8b2d-cdfb73d3127b '!=' 9ede3839-1e42-46d8-8b2d-cdfb73d3127b ']' 00:15:48.961 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:48.961 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:48.961 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:48.961 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:48.961 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.961 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.961 [2024-12-06 13:10:35.811916] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:48.961 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.961 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:48.961 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.961 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.961 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:48.961 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:48.961 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:48.961 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.961 13:10:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.961 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.961 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.961 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.961 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.961 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.961 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.961 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.961 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.961 "name": "raid_bdev1", 00:15:48.961 "uuid": "9ede3839-1e42-46d8-8b2d-cdfb73d3127b", 00:15:48.961 "strip_size_kb": 0, 00:15:48.961 "state": "online", 00:15:48.961 "raid_level": "raid1", 00:15:48.961 "superblock": true, 00:15:48.961 "num_base_bdevs": 3, 00:15:48.961 "num_base_bdevs_discovered": 2, 00:15:48.961 "num_base_bdevs_operational": 2, 00:15:48.961 "base_bdevs_list": [ 00:15:48.961 { 00:15:48.961 "name": null, 00:15:48.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.961 "is_configured": false, 00:15:48.961 "data_offset": 0, 00:15:48.961 "data_size": 63488 00:15:48.961 }, 00:15:48.961 { 00:15:48.961 "name": "pt2", 00:15:48.961 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:48.961 "is_configured": true, 00:15:48.961 "data_offset": 2048, 00:15:48.961 "data_size": 63488 00:15:48.961 }, 00:15:48.961 { 00:15:48.961 "name": "pt3", 00:15:48.961 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:48.961 "is_configured": true, 00:15:48.961 "data_offset": 2048, 00:15:48.961 "data_size": 63488 00:15:48.961 } 
00:15:48.961 ] 00:15:48.961 }' 00:15:48.961 13:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.961 13:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.527 [2024-12-06 13:10:36.324034] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:49.527 [2024-12-06 13:10:36.324073] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:49.527 [2024-12-06 13:10:36.324167] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:49.527 [2024-12-06 13:10:36.324241] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:49.527 [2024-12-06 13:10:36.324263] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.527 13:10:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.527 [2024-12-06 13:10:36.412013] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:49.527 [2024-12-06 13:10:36.412096] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.527 [2024-12-06 13:10:36.412123] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:49.527 [2024-12-06 13:10:36.412142] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.527 [2024-12-06 13:10:36.415089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.527 [2024-12-06 13:10:36.415139] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:49.527 [2024-12-06 13:10:36.415238] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:49.527 [2024-12-06 13:10:36.415301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:49.527 pt2 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.527 13:10:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.527 "name": "raid_bdev1", 00:15:49.527 "uuid": "9ede3839-1e42-46d8-8b2d-cdfb73d3127b", 00:15:49.527 "strip_size_kb": 0, 00:15:49.527 "state": "configuring", 00:15:49.527 "raid_level": "raid1", 00:15:49.527 "superblock": true, 00:15:49.527 "num_base_bdevs": 3, 00:15:49.527 "num_base_bdevs_discovered": 1, 00:15:49.527 "num_base_bdevs_operational": 2, 00:15:49.527 "base_bdevs_list": [ 00:15:49.527 { 00:15:49.527 "name": null, 00:15:49.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.527 "is_configured": false, 00:15:49.527 "data_offset": 2048, 00:15:49.527 "data_size": 63488 00:15:49.527 }, 00:15:49.527 { 00:15:49.527 "name": "pt2", 00:15:49.527 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:49.527 "is_configured": true, 00:15:49.527 "data_offset": 2048, 00:15:49.527 "data_size": 63488 00:15:49.527 }, 00:15:49.527 { 00:15:49.527 "name": null, 00:15:49.527 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:49.527 "is_configured": false, 00:15:49.527 "data_offset": 2048, 00:15:49.527 "data_size": 63488 00:15:49.527 } 
00:15:49.527 ] 00:15:49.527 }' 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.527 13:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.093 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:50.093 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:50.093 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:15:50.093 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:50.093 13:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.093 13:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.093 [2024-12-06 13:10:36.960207] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:50.093 [2024-12-06 13:10:36.960306] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.093 [2024-12-06 13:10:36.960340] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:50.093 [2024-12-06 13:10:36.960360] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.093 [2024-12-06 13:10:36.960964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.093 [2024-12-06 13:10:36.961009] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:50.093 [2024-12-06 13:10:36.961129] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:50.093 [2024-12-06 13:10:36.961172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:50.093 [2024-12-06 13:10:36.961317] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:15:50.093 [2024-12-06 13:10:36.961346] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:50.093 [2024-12-06 13:10:36.961709] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:50.093 [2024-12-06 13:10:36.961914] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:50.093 [2024-12-06 13:10:36.961943] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:50.093 [2024-12-06 13:10:36.962117] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.093 pt3 00:15:50.093 13:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.093 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:50.093 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.093 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.093 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:50.093 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:50.093 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:50.093 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.093 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.093 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.093 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.093 13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.093 
13:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.093 13:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.093 13:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.093 13:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.093 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.093 "name": "raid_bdev1", 00:15:50.093 "uuid": "9ede3839-1e42-46d8-8b2d-cdfb73d3127b", 00:15:50.093 "strip_size_kb": 0, 00:15:50.093 "state": "online", 00:15:50.093 "raid_level": "raid1", 00:15:50.093 "superblock": true, 00:15:50.093 "num_base_bdevs": 3, 00:15:50.093 "num_base_bdevs_discovered": 2, 00:15:50.093 "num_base_bdevs_operational": 2, 00:15:50.093 "base_bdevs_list": [ 00:15:50.093 { 00:15:50.093 "name": null, 00:15:50.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.093 "is_configured": false, 00:15:50.093 "data_offset": 2048, 00:15:50.093 "data_size": 63488 00:15:50.093 }, 00:15:50.093 { 00:15:50.093 "name": "pt2", 00:15:50.093 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:50.093 "is_configured": true, 00:15:50.093 "data_offset": 2048, 00:15:50.093 "data_size": 63488 00:15:50.093 }, 00:15:50.093 { 00:15:50.093 "name": "pt3", 00:15:50.093 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:50.093 "is_configured": true, 00:15:50.093 "data_offset": 2048, 00:15:50.093 "data_size": 63488 00:15:50.093 } 00:15:50.093 ] 00:15:50.093 }' 00:15:50.093 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.093 13:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.660 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:50.660 13:10:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.660 13:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.660 [2024-12-06 13:10:37.532357] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:50.660 [2024-12-06 13:10:37.532403] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:50.660 [2024-12-06 13:10:37.532542] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:50.660 [2024-12-06 13:10:37.532642] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:50.660 [2024-12-06 13:10:37.532658] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:50.660 13:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.660 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.660 13:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.660 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:50.660 13:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.660 13:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.660 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:50.660 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:50.660 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:15:50.660 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:50.660 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:50.660 13:10:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.660 13:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.660 13:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.660 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:50.660 13:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.660 13:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.660 [2024-12-06 13:10:37.600368] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:50.660 [2024-12-06 13:10:37.600435] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.660 [2024-12-06 13:10:37.600479] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:50.660 [2024-12-06 13:10:37.600497] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.660 [2024-12-06 13:10:37.603305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.661 [2024-12-06 13:10:37.603357] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:50.661 [2024-12-06 13:10:37.603461] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:50.661 [2024-12-06 13:10:37.603536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:50.661 [2024-12-06 13:10:37.603725] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:50.661 [2024-12-06 13:10:37.603744] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:50.661 [2024-12-06 13:10:37.603765] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:15:50.661 [2024-12-06 13:10:37.603844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:50.661 pt1 00:15:50.661 13:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.661 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:50.661 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:50.661 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.661 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.661 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:50.661 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:50.661 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:50.661 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.661 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.661 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.661 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.661 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.661 13:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.661 13:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.661 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.661 13:10:37 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.918 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.918 "name": "raid_bdev1", 00:15:50.918 "uuid": "9ede3839-1e42-46d8-8b2d-cdfb73d3127b", 00:15:50.918 "strip_size_kb": 0, 00:15:50.918 "state": "configuring", 00:15:50.918 "raid_level": "raid1", 00:15:50.918 "superblock": true, 00:15:50.918 "num_base_bdevs": 3, 00:15:50.918 "num_base_bdevs_discovered": 1, 00:15:50.918 "num_base_bdevs_operational": 2, 00:15:50.918 "base_bdevs_list": [ 00:15:50.918 { 00:15:50.918 "name": null, 00:15:50.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.918 "is_configured": false, 00:15:50.918 "data_offset": 2048, 00:15:50.918 "data_size": 63488 00:15:50.918 }, 00:15:50.918 { 00:15:50.918 "name": "pt2", 00:15:50.918 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:50.918 "is_configured": true, 00:15:50.918 "data_offset": 2048, 00:15:50.918 "data_size": 63488 00:15:50.918 }, 00:15:50.918 { 00:15:50.918 "name": null, 00:15:50.918 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:50.918 "is_configured": false, 00:15:50.918 "data_offset": 2048, 00:15:50.918 "data_size": 63488 00:15:50.918 } 00:15:50.918 ] 00:15:50.918 }' 00:15:50.918 13:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.918 13:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.176 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:51.176 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.176 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.176 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:51.176 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:51.435 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:51.435 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:51.435 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.435 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.435 [2024-12-06 13:10:38.216594] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:51.435 [2024-12-06 13:10:38.216701] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.435 [2024-12-06 13:10:38.216754] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:51.435 [2024-12-06 13:10:38.216772] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.435 [2024-12-06 13:10:38.217425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.435 [2024-12-06 13:10:38.217488] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:51.435 [2024-12-06 13:10:38.217593] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:51.435 [2024-12-06 13:10:38.217626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:51.435 [2024-12-06 13:10:38.217780] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:51.435 [2024-12-06 13:10:38.217796] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:51.435 [2024-12-06 13:10:38.218117] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:51.435 [2024-12-06 13:10:38.218341] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:51.435 [2024-12-06 13:10:38.218371] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:51.435 [2024-12-06 13:10:38.218565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.435 pt3 00:15:51.435 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.435 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:51.436 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.436 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.436 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:51.436 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:51.436 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:51.436 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.436 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.436 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.436 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.436 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.436 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.436 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.436 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.436 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:15:51.436 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.436 "name": "raid_bdev1", 00:15:51.436 "uuid": "9ede3839-1e42-46d8-8b2d-cdfb73d3127b", 00:15:51.436 "strip_size_kb": 0, 00:15:51.436 "state": "online", 00:15:51.436 "raid_level": "raid1", 00:15:51.436 "superblock": true, 00:15:51.436 "num_base_bdevs": 3, 00:15:51.436 "num_base_bdevs_discovered": 2, 00:15:51.436 "num_base_bdevs_operational": 2, 00:15:51.436 "base_bdevs_list": [ 00:15:51.436 { 00:15:51.436 "name": null, 00:15:51.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.436 "is_configured": false, 00:15:51.436 "data_offset": 2048, 00:15:51.436 "data_size": 63488 00:15:51.436 }, 00:15:51.436 { 00:15:51.436 "name": "pt2", 00:15:51.436 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:51.436 "is_configured": true, 00:15:51.436 "data_offset": 2048, 00:15:51.436 "data_size": 63488 00:15:51.436 }, 00:15:51.436 { 00:15:51.436 "name": "pt3", 00:15:51.436 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:51.436 "is_configured": true, 00:15:51.436 "data_offset": 2048, 00:15:51.436 "data_size": 63488 00:15:51.436 } 00:15:51.436 ] 00:15:51.436 }' 00:15:51.436 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.436 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.002 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:52.002 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:52.002 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.002 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.002 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.002 13:10:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:52.002 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:52.002 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:52.002 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.002 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.002 [2024-12-06 13:10:38.813139] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:52.002 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.002 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 9ede3839-1e42-46d8-8b2d-cdfb73d3127b '!=' 9ede3839-1e42-46d8-8b2d-cdfb73d3127b ']' 00:15:52.002 13:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68964 00:15:52.002 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68964 ']' 00:15:52.002 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68964 00:15:52.002 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:52.002 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:52.002 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68964 00:15:52.002 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:52.002 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:52.002 killing process with pid 68964 00:15:52.002 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68964' 00:15:52.002 13:10:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68964 00:15:52.002 [2024-12-06 13:10:38.887581] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:52.002 [2024-12-06 13:10:38.887679] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:52.002 13:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68964 00:15:52.002 [2024-12-06 13:10:38.887758] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:52.002 [2024-12-06 13:10:38.887777] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:52.329 [2024-12-06 13:10:39.155540] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:53.267 13:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:53.267 00:15:53.267 real 0m8.653s 00:15:53.267 user 0m14.057s 00:15:53.267 sys 0m1.301s 00:15:53.267 13:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.267 ************************************ 00:15:53.267 END TEST raid_superblock_test 00:15:53.267 13:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.267 ************************************ 00:15:53.267 13:10:40 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:15:53.267 13:10:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:53.267 13:10:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.267 13:10:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:53.267 ************************************ 00:15:53.267 START TEST raid_read_error_test 00:15:53.267 ************************************ 00:15:53.267 13:10:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:15:53.267 13:10:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:15:53.267 13:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:15:53.267 13:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:15:53.267 13:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:53.267 13:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:53.267 13:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:53.267 13:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:53.267 13:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:53.267 13:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:53.267 13:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:53.267 13:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:53.267 13:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:53.267 13:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:53.267 13:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:53.267 13:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:53.267 13:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:53.267 13:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:53.267 13:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:53.267 13:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:53.267 13:10:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:53.267 13:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:53.267 13:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:15:53.267 13:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:15:53.267 13:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:53.526 13:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.X2oitDJHDK 00:15:53.526 13:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69417 00:15:53.526 13:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69417 00:15:53.526 13:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:53.526 13:10:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69417 ']' 00:15:53.526 13:10:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.526 13:10:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:53.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.526 13:10:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.526 13:10:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:53.526 13:10:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.526 [2024-12-06 13:10:40.401461] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:15:53.526 [2024-12-06 13:10:40.401692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69417 ] 00:15:53.785 [2024-12-06 13:10:40.591796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.786 [2024-12-06 13:10:40.723539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.045 [2024-12-06 13:10:40.927933] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.045 [2024-12-06 13:10:40.928010] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.613 BaseBdev1_malloc 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.613 true 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.613 [2024-12-06 13:10:41.453815] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:54.613 [2024-12-06 13:10:41.453897] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.613 [2024-12-06 13:10:41.453927] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:54.613 [2024-12-06 13:10:41.453946] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.613 [2024-12-06 13:10:41.456757] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.613 [2024-12-06 13:10:41.456815] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:54.613 BaseBdev1 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.613 BaseBdev2_malloc 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.613 true 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.613 [2024-12-06 13:10:41.517603] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:54.613 [2024-12-06 13:10:41.517692] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.613 [2024-12-06 13:10:41.517718] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:54.613 [2024-12-06 13:10:41.517736] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.613 [2024-12-06 13:10:41.520501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.613 [2024-12-06 13:10:41.520550] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:54.613 BaseBdev2 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.613 BaseBdev3_malloc 00:15:54.613 13:10:41 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.613 true 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.613 [2024-12-06 13:10:41.598289] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:54.613 [2024-12-06 13:10:41.598368] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.613 [2024-12-06 13:10:41.598395] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:54.613 [2024-12-06 13:10:41.598413] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.613 [2024-12-06 13:10:41.601309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.613 [2024-12-06 13:10:41.601356] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:54.613 BaseBdev3 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.613 [2024-12-06 13:10:41.606381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:54.613 [2024-12-06 13:10:41.608919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:54.613 [2024-12-06 13:10:41.609025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:54.613 [2024-12-06 13:10:41.609321] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:54.613 [2024-12-06 13:10:41.609341] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:54.613 [2024-12-06 13:10:41.609664] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:15:54.613 [2024-12-06 13:10:41.609922] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:54.613 [2024-12-06 13:10:41.609998] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:54.613 [2024-12-06 13:10:41.610194] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.613 13:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.614 13:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:54.614 13:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:54.614 13:10:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:54.614 13:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.614 13:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.614 13:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.614 13:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.614 13:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.614 13:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.614 13:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.614 13:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.872 13:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.872 13:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.872 "name": "raid_bdev1", 00:15:54.872 "uuid": "9e0f41aa-757e-4c30-ab53-97f6d2b8a8d7", 00:15:54.872 "strip_size_kb": 0, 00:15:54.872 "state": "online", 00:15:54.872 "raid_level": "raid1", 00:15:54.872 "superblock": true, 00:15:54.872 "num_base_bdevs": 3, 00:15:54.872 "num_base_bdevs_discovered": 3, 00:15:54.872 "num_base_bdevs_operational": 3, 00:15:54.872 "base_bdevs_list": [ 00:15:54.872 { 00:15:54.872 "name": "BaseBdev1", 00:15:54.872 "uuid": "392b8f09-3b1a-5c6a-b72c-3c70d47bec2a", 00:15:54.872 "is_configured": true, 00:15:54.872 "data_offset": 2048, 00:15:54.872 "data_size": 63488 00:15:54.872 }, 00:15:54.872 { 00:15:54.872 "name": "BaseBdev2", 00:15:54.872 "uuid": "ef79eb49-5b4c-5fbd-9567-cfbd15a2fbfe", 00:15:54.872 "is_configured": true, 00:15:54.872 "data_offset": 2048, 00:15:54.872 "data_size": 63488 
00:15:54.872 }, 00:15:54.872 { 00:15:54.872 "name": "BaseBdev3", 00:15:54.872 "uuid": "58b3ea67-99ea-5772-8422-e1378c5328f3", 00:15:54.872 "is_configured": true, 00:15:54.872 "data_offset": 2048, 00:15:54.872 "data_size": 63488 00:15:54.872 } 00:15:54.872 ] 00:15:54.872 }' 00:15:54.872 13:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.872 13:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.131 13:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:55.131 13:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:55.389 [2024-12-06 13:10:42.267994] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:15:56.324 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:56.324 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.324 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.324 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.324 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:56.324 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:15:56.324 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:15:56.324 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:15:56.324 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:56.324 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.324 
13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.324 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:56.324 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:56.324 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:56.324 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.324 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.324 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.324 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.324 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.324 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.324 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.324 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.324 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.324 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.324 "name": "raid_bdev1", 00:15:56.324 "uuid": "9e0f41aa-757e-4c30-ab53-97f6d2b8a8d7", 00:15:56.324 "strip_size_kb": 0, 00:15:56.324 "state": "online", 00:15:56.324 "raid_level": "raid1", 00:15:56.324 "superblock": true, 00:15:56.324 "num_base_bdevs": 3, 00:15:56.324 "num_base_bdevs_discovered": 3, 00:15:56.324 "num_base_bdevs_operational": 3, 00:15:56.324 "base_bdevs_list": [ 00:15:56.324 { 00:15:56.324 "name": "BaseBdev1", 00:15:56.324 "uuid": "392b8f09-3b1a-5c6a-b72c-3c70d47bec2a", 
00:15:56.324 "is_configured": true, 00:15:56.324 "data_offset": 2048, 00:15:56.324 "data_size": 63488 00:15:56.324 }, 00:15:56.324 { 00:15:56.324 "name": "BaseBdev2", 00:15:56.324 "uuid": "ef79eb49-5b4c-5fbd-9567-cfbd15a2fbfe", 00:15:56.324 "is_configured": true, 00:15:56.324 "data_offset": 2048, 00:15:56.324 "data_size": 63488 00:15:56.324 }, 00:15:56.324 { 00:15:56.324 "name": "BaseBdev3", 00:15:56.324 "uuid": "58b3ea67-99ea-5772-8422-e1378c5328f3", 00:15:56.324 "is_configured": true, 00:15:56.324 "data_offset": 2048, 00:15:56.324 "data_size": 63488 00:15:56.324 } 00:15:56.324 ] 00:15:56.324 }' 00:15:56.324 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.324 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.893 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:56.893 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.893 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.893 [2024-12-06 13:10:43.700277] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:56.893 [2024-12-06 13:10:43.700544] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:56.893 [2024-12-06 13:10:43.704126] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:56.893 { 00:15:56.893 "results": [ 00:15:56.893 { 00:15:56.893 "job": "raid_bdev1", 00:15:56.893 "core_mask": "0x1", 00:15:56.893 "workload": "randrw", 00:15:56.893 "percentage": 50, 00:15:56.893 "status": "finished", 00:15:56.893 "queue_depth": 1, 00:15:56.893 "io_size": 131072, 00:15:56.893 "runtime": 1.430156, 00:15:56.893 "iops": 9464.701752815778, 00:15:56.893 "mibps": 1183.0877191019722, 00:15:56.893 "io_failed": 0, 00:15:56.893 "io_timeout": 0, 00:15:56.893 "avg_latency_us": 101.38885880077369, 
00:15:56.893 "min_latency_us": 43.054545454545455, 00:15:56.893 "max_latency_us": 1802.24 00:15:56.893 } 00:15:56.893 ], 00:15:56.893 "core_count": 1 00:15:56.893 } 00:15:56.893 [2024-12-06 13:10:43.704363] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.893 [2024-12-06 13:10:43.704549] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:56.893 [2024-12-06 13:10:43.704568] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:56.893 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.893 13:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69417 00:15:56.893 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69417 ']' 00:15:56.893 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69417 00:15:56.893 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:15:56.893 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:56.893 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69417 00:15:56.893 killing process with pid 69417 00:15:56.893 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:56.893 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:56.893 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69417' 00:15:56.893 13:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69417 00:15:56.893 [2024-12-06 13:10:43.744708] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:56.893 13:10:43 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69417 00:15:57.152 [2024-12-06 13:10:43.944101] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:58.529 13:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.X2oitDJHDK 00:15:58.529 13:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:58.529 13:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:58.529 13:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:15:58.529 13:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:15:58.529 ************************************ 00:15:58.529 END TEST raid_read_error_test 00:15:58.529 ************************************ 00:15:58.529 13:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:58.529 13:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:58.529 13:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:15:58.529 00:15:58.529 real 0m4.854s 00:15:58.529 user 0m6.023s 00:15:58.529 sys 0m0.608s 00:15:58.529 13:10:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:58.529 13:10:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.529 13:10:45 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:15:58.529 13:10:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:58.529 13:10:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:58.529 13:10:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:58.529 ************************************ 00:15:58.529 START TEST raid_write_error_test 00:15:58.529 ************************************ 00:15:58.529 13:10:45 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:15:58.529 13:10:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:15:58.529 13:10:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:15:58.529 13:10:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:15:58.529 13:10:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:58.529 13:10:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:58.529 13:10:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:58.529 13:10:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:58.529 13:10:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:58.529 13:10:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:58.529 13:10:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:58.529 13:10:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:58.529 13:10:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:58.529 13:10:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:58.529 13:10:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:58.529 13:10:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:58.529 13:10:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:58.529 13:10:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:58.529 13:10:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:15:58.529 13:10:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:58.529 13:10:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:58.529 13:10:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:58.529 13:10:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:15:58.529 13:10:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:15:58.529 13:10:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:58.529 13:10:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.JiBQf80YDJ 00:15:58.529 13:10:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69568 00:15:58.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.529 13:10:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69568 00:15:58.529 13:10:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:58.529 13:10:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69568 ']' 00:15:58.529 13:10:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.529 13:10:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:58.529 13:10:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:58.529 13:10:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:58.529 13:10:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.529 [2024-12-06 13:10:45.298482] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:15:58.529 [2024-12-06 13:10:45.298892] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69568 ] 00:15:58.529 [2024-12-06 13:10:45.484823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.800 [2024-12-06 13:10:45.635054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.058 [2024-12-06 13:10:45.867892] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:59.058 [2024-12-06 13:10:45.867971] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:59.625 13:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:59.625 13:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:59.625 13:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:59.625 13:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:59.625 13:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.625 13:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.625 BaseBdev1_malloc 00:15:59.625 13:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.625 13:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:15:59.625 13:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.625 13:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.625 true 00:15:59.625 13:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.625 13:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:59.625 13:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.625 13:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.625 [2024-12-06 13:10:46.392255] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:59.625 [2024-12-06 13:10:46.392384] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.625 [2024-12-06 13:10:46.392417] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:59.625 [2024-12-06 13:10:46.392455] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.625 [2024-12-06 13:10:46.395484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.625 [2024-12-06 13:10:46.395548] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:59.625 BaseBdev1 00:15:59.625 13:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.625 13:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:59.625 13:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:59.625 13:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.625 13:10:46 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:59.625 BaseBdev2_malloc 00:15:59.625 13:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.625 13:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:59.625 13:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.625 13:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.625 true 00:15:59.625 13:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.625 13:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:59.625 13:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.625 13:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.625 [2024-12-06 13:10:46.450045] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:59.625 [2024-12-06 13:10:46.450146] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.625 [2024-12-06 13:10:46.450179] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:59.625 [2024-12-06 13:10:46.450216] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.625 [2024-12-06 13:10:46.453474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.625 [2024-12-06 13:10:46.453556] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:59.625 BaseBdev2 00:15:59.625 13:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.625 13:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:59.625 13:10:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:59.625 13:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.626 13:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.626 BaseBdev3_malloc 00:15:59.626 13:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.626 13:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:59.626 13:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.626 13:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.626 true 00:15:59.626 13:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.626 13:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:59.626 13:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.626 13:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.626 [2024-12-06 13:10:46.515127] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:59.626 [2024-12-06 13:10:46.515243] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.626 [2024-12-06 13:10:46.515273] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:59.626 [2024-12-06 13:10:46.515293] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.626 [2024-12-06 13:10:46.518093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.626 [2024-12-06 13:10:46.518413] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:15:59.626 BaseBdev3 00:15:59.626 13:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.626 13:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:15:59.626 13:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.626 13:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.626 [2024-12-06 13:10:46.523336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:59.626 [2024-12-06 13:10:46.525905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:59.626 [2024-12-06 13:10:46.526041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:59.626 [2024-12-06 13:10:46.526364] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:59.626 [2024-12-06 13:10:46.526385] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:59.626 [2024-12-06 13:10:46.526773] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:15:59.626 [2024-12-06 13:10:46.527047] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:59.626 [2024-12-06 13:10:46.527078] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:59.626 [2024-12-06 13:10:46.527337] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.626 13:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.626 13:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:59.626 13:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:15:59.626 13:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.626 13:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.626 13:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.626 13:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:59.626 13:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.626 13:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.626 13:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.626 13:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.626 13:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.626 13:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.626 13:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.626 13:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.626 13:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.626 13:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.626 "name": "raid_bdev1", 00:15:59.626 "uuid": "68b1bcdb-d9a4-44e9-8686-2951aac27d75", 00:15:59.626 "strip_size_kb": 0, 00:15:59.626 "state": "online", 00:15:59.626 "raid_level": "raid1", 00:15:59.626 "superblock": true, 00:15:59.626 "num_base_bdevs": 3, 00:15:59.626 "num_base_bdevs_discovered": 3, 00:15:59.626 "num_base_bdevs_operational": 3, 00:15:59.626 "base_bdevs_list": [ 00:15:59.626 { 00:15:59.626 "name": "BaseBdev1", 00:15:59.626 
"uuid": "cdf92308-4977-58ba-81fd-f977e70fe353", 00:15:59.626 "is_configured": true, 00:15:59.626 "data_offset": 2048, 00:15:59.626 "data_size": 63488 00:15:59.626 }, 00:15:59.626 { 00:15:59.626 "name": "BaseBdev2", 00:15:59.626 "uuid": "2142c3f4-b4d7-53f2-bbf0-343443268536", 00:15:59.626 "is_configured": true, 00:15:59.626 "data_offset": 2048, 00:15:59.626 "data_size": 63488 00:15:59.626 }, 00:15:59.626 { 00:15:59.626 "name": "BaseBdev3", 00:15:59.626 "uuid": "129e2879-bf1c-5a61-8134-0a983677cb04", 00:15:59.626 "is_configured": true, 00:15:59.626 "data_offset": 2048, 00:15:59.626 "data_size": 63488 00:15:59.626 } 00:15:59.626 ] 00:15:59.626 }' 00:15:59.626 13:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.626 13:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.192 13:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:00.192 13:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:00.192 [2024-12-06 13:10:47.165498] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:16:01.126 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:01.126 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.126 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.126 [2024-12-06 13:10:48.064295] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:16:01.126 [2024-12-06 13:10:48.064368] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:01.126 [2024-12-06 13:10:48.064666] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:16:01.126 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.126 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:01.126 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:01.126 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:16:01.126 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:16:01.126 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:01.126 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.126 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.126 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:01.126 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:01.126 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:01.126 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.126 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.126 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.126 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.126 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.126 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.126 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:01.126 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.126 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.126 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.126 "name": "raid_bdev1", 00:16:01.126 "uuid": "68b1bcdb-d9a4-44e9-8686-2951aac27d75", 00:16:01.126 "strip_size_kb": 0, 00:16:01.126 "state": "online", 00:16:01.126 "raid_level": "raid1", 00:16:01.126 "superblock": true, 00:16:01.126 "num_base_bdevs": 3, 00:16:01.126 "num_base_bdevs_discovered": 2, 00:16:01.126 "num_base_bdevs_operational": 2, 00:16:01.126 "base_bdevs_list": [ 00:16:01.127 { 00:16:01.127 "name": null, 00:16:01.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.127 "is_configured": false, 00:16:01.127 "data_offset": 0, 00:16:01.127 "data_size": 63488 00:16:01.127 }, 00:16:01.127 { 00:16:01.127 "name": "BaseBdev2", 00:16:01.127 "uuid": "2142c3f4-b4d7-53f2-bbf0-343443268536", 00:16:01.127 "is_configured": true, 00:16:01.127 "data_offset": 2048, 00:16:01.127 "data_size": 63488 00:16:01.127 }, 00:16:01.127 { 00:16:01.127 "name": "BaseBdev3", 00:16:01.127 "uuid": "129e2879-bf1c-5a61-8134-0a983677cb04", 00:16:01.127 "is_configured": true, 00:16:01.127 "data_offset": 2048, 00:16:01.127 "data_size": 63488 00:16:01.127 } 00:16:01.127 ] 00:16:01.127 }' 00:16:01.127 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.127 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.693 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:01.693 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.693 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.693 [2024-12-06 13:10:48.614071] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:01.693 [2024-12-06 13:10:48.614505] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:01.693 { 00:16:01.693 "results": [ 00:16:01.693 { 00:16:01.693 "job": "raid_bdev1", 00:16:01.693 "core_mask": "0x1", 00:16:01.693 "workload": "randrw", 00:16:01.693 "percentage": 50, 00:16:01.693 "status": "finished", 00:16:01.693 "queue_depth": 1, 00:16:01.693 "io_size": 131072, 00:16:01.693 "runtime": 1.445476, 00:16:01.693 "iops": 8489.245065293371, 00:16:01.693 "mibps": 1061.1556331616714, 00:16:01.693 "io_failed": 0, 00:16:01.693 "io_timeout": 0, 00:16:01.693 "avg_latency_us": 112.69722879516377, 00:16:01.693 "min_latency_us": 42.82181818181818, 00:16:01.693 "max_latency_us": 1712.8727272727272 00:16:01.693 } 00:16:01.693 ], 00:16:01.693 "core_count": 1 00:16:01.693 } 00:16:01.694 [2024-12-06 13:10:48.618663] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:01.694 [2024-12-06 13:10:48.618843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.694 [2024-12-06 13:10:48.618980] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:01.694 [2024-12-06 13:10:48.619013] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:01.694 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.694 13:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69568 00:16:01.694 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69568 ']' 00:16:01.694 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69568 00:16:01.694 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:16:01.694 13:10:48 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:01.694 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69568 00:16:01.694 killing process with pid 69568 00:16:01.694 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:01.694 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:01.694 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69568' 00:16:01.694 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69568 00:16:01.694 13:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69568 00:16:01.694 [2024-12-06 13:10:48.660279] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:01.951 [2024-12-06 13:10:48.917614] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:03.327 13:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.JiBQf80YDJ 00:16:03.327 13:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:03.327 13:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:03.327 13:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:16:03.327 13:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:16:03.327 13:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:03.327 13:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:03.327 13:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:03.327 00:16:03.327 real 0m5.099s 00:16:03.327 user 0m6.175s 00:16:03.327 sys 0m0.708s 00:16:03.327 13:10:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:03.327 13:10:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.327 ************************************ 00:16:03.327 END TEST raid_write_error_test 00:16:03.327 ************************************ 00:16:03.327 13:10:50 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:16:03.327 13:10:50 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:16:03.327 13:10:50 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:16:03.327 13:10:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:03.327 13:10:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:03.327 13:10:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:03.327 ************************************ 00:16:03.327 START TEST raid_state_function_test 00:16:03.327 ************************************ 00:16:03.327 13:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:16:03.327 13:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:16:03.327 13:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:03.327 13:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:03.327 13:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:03.327 13:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:03.327 13:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:03.327 13:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:03.327 13:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:16:03.327 13:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:03.327 13:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:03.327 13:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:03.327 13:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:03.327 13:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:03.327 13:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:03.327 13:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:03.327 13:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:03.327 13:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:03.586 13:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:03.586 13:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:03.586 13:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:03.586 13:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:03.586 13:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:03.586 13:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:03.586 13:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:03.586 13:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:16:03.586 13:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:03.586 
13:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:03.586 13:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:03.586 13:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:03.586 Process raid pid: 69713 00:16:03.586 13:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69713 00:16:03.586 13:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:03.586 13:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69713' 00:16:03.586 13:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69713 00:16:03.586 13:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69713 ']' 00:16:03.586 13:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.586 13:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:03.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.586 13:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.586 13:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:03.586 13:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.586 [2024-12-06 13:10:50.452896] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:16:03.586 [2024-12-06 13:10:50.453104] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.844 [2024-12-06 13:10:50.643917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.844 [2024-12-06 13:10:50.795046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.102 [2024-12-06 13:10:51.026791] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:04.102 [2024-12-06 13:10:51.026875] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:04.669 13:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:04.669 13:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:04.669 13:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:04.669 13:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.669 13:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.669 [2024-12-06 13:10:51.384335] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:04.669 [2024-12-06 13:10:51.384452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:04.669 [2024-12-06 13:10:51.384489] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:04.669 [2024-12-06 13:10:51.384513] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:04.669 [2024-12-06 13:10:51.384526] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:16:04.669 [2024-12-06 13:10:51.384543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:04.669 [2024-12-06 13:10:51.384555] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:04.670 [2024-12-06 13:10:51.384573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:04.670 13:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.670 13:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:04.670 13:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:04.670 13:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:04.670 13:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:04.670 13:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.670 13:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:04.670 13:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.670 13:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.670 13:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.670 13:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.670 13:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.670 13:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.670 13:10:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.670 13:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.670 13:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.670 13:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.670 "name": "Existed_Raid", 00:16:04.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.670 "strip_size_kb": 64, 00:16:04.670 "state": "configuring", 00:16:04.670 "raid_level": "raid0", 00:16:04.670 "superblock": false, 00:16:04.670 "num_base_bdevs": 4, 00:16:04.670 "num_base_bdevs_discovered": 0, 00:16:04.670 "num_base_bdevs_operational": 4, 00:16:04.670 "base_bdevs_list": [ 00:16:04.670 { 00:16:04.670 "name": "BaseBdev1", 00:16:04.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.670 "is_configured": false, 00:16:04.670 "data_offset": 0, 00:16:04.670 "data_size": 0 00:16:04.670 }, 00:16:04.670 { 00:16:04.670 "name": "BaseBdev2", 00:16:04.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.670 "is_configured": false, 00:16:04.670 "data_offset": 0, 00:16:04.670 "data_size": 0 00:16:04.670 }, 00:16:04.670 { 00:16:04.670 "name": "BaseBdev3", 00:16:04.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.670 "is_configured": false, 00:16:04.670 "data_offset": 0, 00:16:04.670 "data_size": 0 00:16:04.670 }, 00:16:04.670 { 00:16:04.670 "name": "BaseBdev4", 00:16:04.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.670 "is_configured": false, 00:16:04.670 "data_offset": 0, 00:16:04.670 "data_size": 0 00:16:04.670 } 00:16:04.670 ] 00:16:04.670 }' 00:16:04.670 13:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.670 13:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.929 13:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:16:04.929 13:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.929 13:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.929 [2024-12-06 13:10:51.904556] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:04.929 [2024-12-06 13:10:51.904902] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:04.929 13:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.929 13:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:04.929 13:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.929 13:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.929 [2024-12-06 13:10:51.912485] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:04.929 [2024-12-06 13:10:51.912594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:04.929 [2024-12-06 13:10:51.912615] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:04.929 [2024-12-06 13:10:51.912636] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:04.929 [2024-12-06 13:10:51.912649] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:04.929 [2024-12-06 13:10:51.912667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:04.929 [2024-12-06 13:10:51.912680] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:04.929 [2024-12-06 13:10:51.912697] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:04.929 13:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.929 13:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:04.929 13:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.929 13:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.187 [2024-12-06 13:10:51.959921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:05.187 BaseBdev1 00:16:05.187 13:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.187 13:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:05.187 13:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:05.187 13:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:05.187 13:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:05.187 13:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:05.187 13:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:05.187 13:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:05.187 13:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.187 13:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.187 13:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.187 13:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:05.187 13:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.187 13:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.187 [ 00:16:05.187 { 00:16:05.187 "name": "BaseBdev1", 00:16:05.187 "aliases": [ 00:16:05.187 "412ae116-f6ae-48c6-a2c4-3e5b65b89228" 00:16:05.187 ], 00:16:05.187 "product_name": "Malloc disk", 00:16:05.187 "block_size": 512, 00:16:05.187 "num_blocks": 65536, 00:16:05.187 "uuid": "412ae116-f6ae-48c6-a2c4-3e5b65b89228", 00:16:05.187 "assigned_rate_limits": { 00:16:05.187 "rw_ios_per_sec": 0, 00:16:05.187 "rw_mbytes_per_sec": 0, 00:16:05.187 "r_mbytes_per_sec": 0, 00:16:05.187 "w_mbytes_per_sec": 0 00:16:05.187 }, 00:16:05.187 "claimed": true, 00:16:05.187 "claim_type": "exclusive_write", 00:16:05.187 "zoned": false, 00:16:05.187 "supported_io_types": { 00:16:05.187 "read": true, 00:16:05.187 "write": true, 00:16:05.187 "unmap": true, 00:16:05.187 "flush": true, 00:16:05.187 "reset": true, 00:16:05.187 "nvme_admin": false, 00:16:05.187 "nvme_io": false, 00:16:05.187 "nvme_io_md": false, 00:16:05.187 "write_zeroes": true, 00:16:05.187 "zcopy": true, 00:16:05.187 "get_zone_info": false, 00:16:05.187 "zone_management": false, 00:16:05.187 "zone_append": false, 00:16:05.187 "compare": false, 00:16:05.187 "compare_and_write": false, 00:16:05.187 "abort": true, 00:16:05.187 "seek_hole": false, 00:16:05.187 "seek_data": false, 00:16:05.187 "copy": true, 00:16:05.187 "nvme_iov_md": false 00:16:05.187 }, 00:16:05.187 "memory_domains": [ 00:16:05.187 { 00:16:05.187 "dma_device_id": "system", 00:16:05.187 "dma_device_type": 1 00:16:05.187 }, 00:16:05.187 { 00:16:05.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.187 "dma_device_type": 2 00:16:05.187 } 00:16:05.187 ], 00:16:05.187 "driver_specific": {} 00:16:05.187 } 00:16:05.187 ] 00:16:05.187 13:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:05.187 13:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:05.187 13:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:05.187 13:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:05.187 13:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:05.187 13:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:05.187 13:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.187 13:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:05.187 13:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.187 13:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.187 13:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.187 13:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.187 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.187 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.188 13:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.188 13:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.188 13:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.188 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.188 "name": "Existed_Raid", 
00:16:05.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.188 "strip_size_kb": 64, 00:16:05.188 "state": "configuring", 00:16:05.188 "raid_level": "raid0", 00:16:05.188 "superblock": false, 00:16:05.188 "num_base_bdevs": 4, 00:16:05.188 "num_base_bdevs_discovered": 1, 00:16:05.188 "num_base_bdevs_operational": 4, 00:16:05.188 "base_bdevs_list": [ 00:16:05.188 { 00:16:05.188 "name": "BaseBdev1", 00:16:05.188 "uuid": "412ae116-f6ae-48c6-a2c4-3e5b65b89228", 00:16:05.188 "is_configured": true, 00:16:05.188 "data_offset": 0, 00:16:05.188 "data_size": 65536 00:16:05.188 }, 00:16:05.188 { 00:16:05.188 "name": "BaseBdev2", 00:16:05.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.188 "is_configured": false, 00:16:05.188 "data_offset": 0, 00:16:05.188 "data_size": 0 00:16:05.188 }, 00:16:05.188 { 00:16:05.188 "name": "BaseBdev3", 00:16:05.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.188 "is_configured": false, 00:16:05.188 "data_offset": 0, 00:16:05.188 "data_size": 0 00:16:05.188 }, 00:16:05.188 { 00:16:05.188 "name": "BaseBdev4", 00:16:05.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.188 "is_configured": false, 00:16:05.188 "data_offset": 0, 00:16:05.188 "data_size": 0 00:16:05.188 } 00:16:05.188 ] 00:16:05.188 }' 00:16:05.188 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.188 13:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.754 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:05.754 13:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.754 13:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.754 [2024-12-06 13:10:52.540214] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:05.754 [2024-12-06 13:10:52.540307] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:05.754 13:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.754 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:05.754 13:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.754 13:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.754 [2024-12-06 13:10:52.548209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:05.754 [2024-12-06 13:10:52.551414] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:05.754 [2024-12-06 13:10:52.551644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:05.754 [2024-12-06 13:10:52.551807] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:05.754 [2024-12-06 13:10:52.551882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:05.754 [2024-12-06 13:10:52.552125] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:05.754 [2024-12-06 13:10:52.552205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:05.754 13:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.754 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:05.754 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:05.754 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:16:05.754 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:05.754 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:05.754 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:05.754 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.754 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:05.754 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.754 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.754 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.754 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.754 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.754 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.754 13:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.754 13:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.754 13:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.754 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.754 "name": "Existed_Raid", 00:16:05.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.754 "strip_size_kb": 64, 00:16:05.754 "state": "configuring", 00:16:05.754 "raid_level": "raid0", 00:16:05.755 "superblock": false, 00:16:05.755 "num_base_bdevs": 4, 00:16:05.755 
"num_base_bdevs_discovered": 1, 00:16:05.755 "num_base_bdevs_operational": 4, 00:16:05.755 "base_bdevs_list": [ 00:16:05.755 { 00:16:05.755 "name": "BaseBdev1", 00:16:05.755 "uuid": "412ae116-f6ae-48c6-a2c4-3e5b65b89228", 00:16:05.755 "is_configured": true, 00:16:05.755 "data_offset": 0, 00:16:05.755 "data_size": 65536 00:16:05.755 }, 00:16:05.755 { 00:16:05.755 "name": "BaseBdev2", 00:16:05.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.755 "is_configured": false, 00:16:05.755 "data_offset": 0, 00:16:05.755 "data_size": 0 00:16:05.755 }, 00:16:05.755 { 00:16:05.755 "name": "BaseBdev3", 00:16:05.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.755 "is_configured": false, 00:16:05.755 "data_offset": 0, 00:16:05.755 "data_size": 0 00:16:05.755 }, 00:16:05.755 { 00:16:05.755 "name": "BaseBdev4", 00:16:05.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.755 "is_configured": false, 00:16:05.755 "data_offset": 0, 00:16:05.755 "data_size": 0 00:16:05.755 } 00:16:05.755 ] 00:16:05.755 }' 00:16:05.755 13:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.755 13:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.381 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:06.381 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.381 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.381 [2024-12-06 13:10:53.108889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:06.381 BaseBdev2 00:16:06.381 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.381 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:06.381 13:10:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:06.381 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:06.381 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:06.381 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:06.381 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:06.381 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:06.381 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.381 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.381 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.381 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:06.381 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.381 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.381 [ 00:16:06.381 { 00:16:06.381 "name": "BaseBdev2", 00:16:06.381 "aliases": [ 00:16:06.381 "0725b6f5-ba94-420f-9aec-32d6628816f5" 00:16:06.381 ], 00:16:06.381 "product_name": "Malloc disk", 00:16:06.381 "block_size": 512, 00:16:06.381 "num_blocks": 65536, 00:16:06.381 "uuid": "0725b6f5-ba94-420f-9aec-32d6628816f5", 00:16:06.381 "assigned_rate_limits": { 00:16:06.381 "rw_ios_per_sec": 0, 00:16:06.381 "rw_mbytes_per_sec": 0, 00:16:06.381 "r_mbytes_per_sec": 0, 00:16:06.381 "w_mbytes_per_sec": 0 00:16:06.381 }, 00:16:06.381 "claimed": true, 00:16:06.381 "claim_type": "exclusive_write", 00:16:06.381 "zoned": false, 00:16:06.381 "supported_io_types": { 
00:16:06.381 "read": true, 00:16:06.381 "write": true, 00:16:06.381 "unmap": true, 00:16:06.381 "flush": true, 00:16:06.381 "reset": true, 00:16:06.381 "nvme_admin": false, 00:16:06.381 "nvme_io": false, 00:16:06.381 "nvme_io_md": false, 00:16:06.381 "write_zeroes": true, 00:16:06.381 "zcopy": true, 00:16:06.381 "get_zone_info": false, 00:16:06.381 "zone_management": false, 00:16:06.381 "zone_append": false, 00:16:06.381 "compare": false, 00:16:06.381 "compare_and_write": false, 00:16:06.381 "abort": true, 00:16:06.381 "seek_hole": false, 00:16:06.381 "seek_data": false, 00:16:06.381 "copy": true, 00:16:06.381 "nvme_iov_md": false 00:16:06.381 }, 00:16:06.381 "memory_domains": [ 00:16:06.381 { 00:16:06.381 "dma_device_id": "system", 00:16:06.381 "dma_device_type": 1 00:16:06.381 }, 00:16:06.381 { 00:16:06.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.381 "dma_device_type": 2 00:16:06.381 } 00:16:06.381 ], 00:16:06.381 "driver_specific": {} 00:16:06.381 } 00:16:06.381 ] 00:16:06.381 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.381 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:06.381 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:06.381 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:06.381 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:06.381 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:06.381 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:06.381 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:06.381 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:16:06.381 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:06.381 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.381 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.381 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.381 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.382 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.382 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.382 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.382 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.382 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.382 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.382 "name": "Existed_Raid", 00:16:06.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.382 "strip_size_kb": 64, 00:16:06.382 "state": "configuring", 00:16:06.382 "raid_level": "raid0", 00:16:06.382 "superblock": false, 00:16:06.382 "num_base_bdevs": 4, 00:16:06.382 "num_base_bdevs_discovered": 2, 00:16:06.382 "num_base_bdevs_operational": 4, 00:16:06.382 "base_bdevs_list": [ 00:16:06.382 { 00:16:06.382 "name": "BaseBdev1", 00:16:06.382 "uuid": "412ae116-f6ae-48c6-a2c4-3e5b65b89228", 00:16:06.382 "is_configured": true, 00:16:06.382 "data_offset": 0, 00:16:06.382 "data_size": 65536 00:16:06.382 }, 00:16:06.382 { 00:16:06.382 "name": "BaseBdev2", 00:16:06.382 "uuid": "0725b6f5-ba94-420f-9aec-32d6628816f5", 00:16:06.382 
"is_configured": true, 00:16:06.382 "data_offset": 0, 00:16:06.382 "data_size": 65536 00:16:06.382 }, 00:16:06.382 { 00:16:06.382 "name": "BaseBdev3", 00:16:06.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.382 "is_configured": false, 00:16:06.382 "data_offset": 0, 00:16:06.382 "data_size": 0 00:16:06.382 }, 00:16:06.382 { 00:16:06.382 "name": "BaseBdev4", 00:16:06.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.382 "is_configured": false, 00:16:06.382 "data_offset": 0, 00:16:06.382 "data_size": 0 00:16:06.382 } 00:16:06.382 ] 00:16:06.382 }' 00:16:06.382 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.382 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.949 [2024-12-06 13:10:53.724757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:06.949 BaseBdev3 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.949 [ 00:16:06.949 { 00:16:06.949 "name": "BaseBdev3", 00:16:06.949 "aliases": [ 00:16:06.949 "8da2d974-018c-4923-a8c3-64fe407a7c63" 00:16:06.949 ], 00:16:06.949 "product_name": "Malloc disk", 00:16:06.949 "block_size": 512, 00:16:06.949 "num_blocks": 65536, 00:16:06.949 "uuid": "8da2d974-018c-4923-a8c3-64fe407a7c63", 00:16:06.949 "assigned_rate_limits": { 00:16:06.949 "rw_ios_per_sec": 0, 00:16:06.949 "rw_mbytes_per_sec": 0, 00:16:06.949 "r_mbytes_per_sec": 0, 00:16:06.949 "w_mbytes_per_sec": 0 00:16:06.949 }, 00:16:06.949 "claimed": true, 00:16:06.949 "claim_type": "exclusive_write", 00:16:06.949 "zoned": false, 00:16:06.949 "supported_io_types": { 00:16:06.949 "read": true, 00:16:06.949 "write": true, 00:16:06.949 "unmap": true, 00:16:06.949 "flush": true, 00:16:06.949 "reset": true, 00:16:06.949 "nvme_admin": false, 00:16:06.949 "nvme_io": false, 00:16:06.949 "nvme_io_md": false, 00:16:06.949 "write_zeroes": true, 00:16:06.949 "zcopy": true, 00:16:06.949 "get_zone_info": false, 00:16:06.949 "zone_management": false, 00:16:06.949 "zone_append": false, 00:16:06.949 "compare": false, 00:16:06.949 "compare_and_write": false, 
00:16:06.949 "abort": true, 00:16:06.949 "seek_hole": false, 00:16:06.949 "seek_data": false, 00:16:06.949 "copy": true, 00:16:06.949 "nvme_iov_md": false 00:16:06.949 }, 00:16:06.949 "memory_domains": [ 00:16:06.949 { 00:16:06.949 "dma_device_id": "system", 00:16:06.949 "dma_device_type": 1 00:16:06.949 }, 00:16:06.949 { 00:16:06.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.949 "dma_device_type": 2 00:16:06.949 } 00:16:06.949 ], 00:16:06.949 "driver_specific": {} 00:16:06.949 } 00:16:06.949 ] 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.949 "name": "Existed_Raid", 00:16:06.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.949 "strip_size_kb": 64, 00:16:06.949 "state": "configuring", 00:16:06.949 "raid_level": "raid0", 00:16:06.949 "superblock": false, 00:16:06.949 "num_base_bdevs": 4, 00:16:06.949 "num_base_bdevs_discovered": 3, 00:16:06.949 "num_base_bdevs_operational": 4, 00:16:06.949 "base_bdevs_list": [ 00:16:06.949 { 00:16:06.949 "name": "BaseBdev1", 00:16:06.949 "uuid": "412ae116-f6ae-48c6-a2c4-3e5b65b89228", 00:16:06.949 "is_configured": true, 00:16:06.949 "data_offset": 0, 00:16:06.949 "data_size": 65536 00:16:06.949 }, 00:16:06.949 { 00:16:06.949 "name": "BaseBdev2", 00:16:06.949 "uuid": "0725b6f5-ba94-420f-9aec-32d6628816f5", 00:16:06.949 "is_configured": true, 00:16:06.949 "data_offset": 0, 00:16:06.949 "data_size": 65536 00:16:06.949 }, 00:16:06.949 { 00:16:06.949 "name": "BaseBdev3", 00:16:06.949 "uuid": "8da2d974-018c-4923-a8c3-64fe407a7c63", 00:16:06.949 "is_configured": true, 00:16:06.949 "data_offset": 0, 00:16:06.949 "data_size": 65536 00:16:06.949 }, 00:16:06.949 { 00:16:06.949 "name": "BaseBdev4", 00:16:06.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.949 "is_configured": false, 
00:16:06.949 "data_offset": 0, 00:16:06.949 "data_size": 0 00:16:06.949 } 00:16:06.949 ] 00:16:06.949 }' 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.949 13:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.514 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:07.514 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.514 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.514 [2024-12-06 13:10:54.344577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:07.514 [2024-12-06 13:10:54.344665] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:07.514 [2024-12-06 13:10:54.344684] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:16:07.514 [2024-12-06 13:10:54.345095] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:07.514 [2024-12-06 13:10:54.345386] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:07.514 [2024-12-06 13:10:54.345411] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:07.514 [2024-12-06 13:10:54.345835] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.514 BaseBdev4 00:16:07.514 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.514 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:07.514 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:07.514 13:10:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:07.515 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:07.515 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:07.515 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:07.515 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:07.515 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.515 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.515 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.515 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:07.515 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.515 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.515 [ 00:16:07.515 { 00:16:07.515 "name": "BaseBdev4", 00:16:07.515 "aliases": [ 00:16:07.515 "6e1ff557-fec6-4b8f-b46e-770ad5f9272b" 00:16:07.515 ], 00:16:07.515 "product_name": "Malloc disk", 00:16:07.515 "block_size": 512, 00:16:07.515 "num_blocks": 65536, 00:16:07.515 "uuid": "6e1ff557-fec6-4b8f-b46e-770ad5f9272b", 00:16:07.515 "assigned_rate_limits": { 00:16:07.515 "rw_ios_per_sec": 0, 00:16:07.515 "rw_mbytes_per_sec": 0, 00:16:07.515 "r_mbytes_per_sec": 0, 00:16:07.515 "w_mbytes_per_sec": 0 00:16:07.515 }, 00:16:07.515 "claimed": true, 00:16:07.515 "claim_type": "exclusive_write", 00:16:07.515 "zoned": false, 00:16:07.515 "supported_io_types": { 00:16:07.515 "read": true, 00:16:07.515 "write": true, 00:16:07.515 "unmap": true, 00:16:07.515 "flush": true, 00:16:07.515 "reset": true, 00:16:07.515 
"nvme_admin": false, 00:16:07.515 "nvme_io": false, 00:16:07.515 "nvme_io_md": false, 00:16:07.515 "write_zeroes": true, 00:16:07.515 "zcopy": true, 00:16:07.515 "get_zone_info": false, 00:16:07.515 "zone_management": false, 00:16:07.515 "zone_append": false, 00:16:07.515 "compare": false, 00:16:07.515 "compare_and_write": false, 00:16:07.515 "abort": true, 00:16:07.515 "seek_hole": false, 00:16:07.515 "seek_data": false, 00:16:07.515 "copy": true, 00:16:07.515 "nvme_iov_md": false 00:16:07.515 }, 00:16:07.515 "memory_domains": [ 00:16:07.515 { 00:16:07.515 "dma_device_id": "system", 00:16:07.515 "dma_device_type": 1 00:16:07.515 }, 00:16:07.515 { 00:16:07.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.515 "dma_device_type": 2 00:16:07.515 } 00:16:07.515 ], 00:16:07.515 "driver_specific": {} 00:16:07.515 } 00:16:07.515 ] 00:16:07.515 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.515 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:07.515 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:07.515 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:07.515 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:16:07.515 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:07.515 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.515 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:07.515 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.515 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:07.515 13:10:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.515 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.515 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.515 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.515 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.515 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.515 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.515 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.515 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.515 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.515 "name": "Existed_Raid", 00:16:07.515 "uuid": "0612ae6c-4767-4e89-a7f8-901de50046a3", 00:16:07.515 "strip_size_kb": 64, 00:16:07.515 "state": "online", 00:16:07.515 "raid_level": "raid0", 00:16:07.515 "superblock": false, 00:16:07.515 "num_base_bdevs": 4, 00:16:07.515 "num_base_bdevs_discovered": 4, 00:16:07.515 "num_base_bdevs_operational": 4, 00:16:07.515 "base_bdevs_list": [ 00:16:07.515 { 00:16:07.515 "name": "BaseBdev1", 00:16:07.515 "uuid": "412ae116-f6ae-48c6-a2c4-3e5b65b89228", 00:16:07.515 "is_configured": true, 00:16:07.515 "data_offset": 0, 00:16:07.515 "data_size": 65536 00:16:07.515 }, 00:16:07.515 { 00:16:07.515 "name": "BaseBdev2", 00:16:07.515 "uuid": "0725b6f5-ba94-420f-9aec-32d6628816f5", 00:16:07.515 "is_configured": true, 00:16:07.515 "data_offset": 0, 00:16:07.515 "data_size": 65536 00:16:07.515 }, 00:16:07.515 { 00:16:07.515 "name": "BaseBdev3", 00:16:07.515 "uuid": 
"8da2d974-018c-4923-a8c3-64fe407a7c63", 00:16:07.515 "is_configured": true, 00:16:07.515 "data_offset": 0, 00:16:07.515 "data_size": 65536 00:16:07.515 }, 00:16:07.515 { 00:16:07.515 "name": "BaseBdev4", 00:16:07.515 "uuid": "6e1ff557-fec6-4b8f-b46e-770ad5f9272b", 00:16:07.515 "is_configured": true, 00:16:07.515 "data_offset": 0, 00:16:07.515 "data_size": 65536 00:16:07.515 } 00:16:07.515 ] 00:16:07.515 }' 00:16:07.515 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.515 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.081 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:08.081 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:08.081 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:08.081 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:08.081 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:08.081 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:08.081 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:08.081 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.081 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.081 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:08.081 [2024-12-06 13:10:54.913384] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:08.081 13:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.081 13:10:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:08.081 "name": "Existed_Raid", 00:16:08.081 "aliases": [ 00:16:08.081 "0612ae6c-4767-4e89-a7f8-901de50046a3" 00:16:08.081 ], 00:16:08.081 "product_name": "Raid Volume", 00:16:08.081 "block_size": 512, 00:16:08.081 "num_blocks": 262144, 00:16:08.081 "uuid": "0612ae6c-4767-4e89-a7f8-901de50046a3", 00:16:08.081 "assigned_rate_limits": { 00:16:08.081 "rw_ios_per_sec": 0, 00:16:08.081 "rw_mbytes_per_sec": 0, 00:16:08.081 "r_mbytes_per_sec": 0, 00:16:08.081 "w_mbytes_per_sec": 0 00:16:08.081 }, 00:16:08.081 "claimed": false, 00:16:08.081 "zoned": false, 00:16:08.082 "supported_io_types": { 00:16:08.082 "read": true, 00:16:08.082 "write": true, 00:16:08.082 "unmap": true, 00:16:08.082 "flush": true, 00:16:08.082 "reset": true, 00:16:08.082 "nvme_admin": false, 00:16:08.082 "nvme_io": false, 00:16:08.082 "nvme_io_md": false, 00:16:08.082 "write_zeroes": true, 00:16:08.082 "zcopy": false, 00:16:08.082 "get_zone_info": false, 00:16:08.082 "zone_management": false, 00:16:08.082 "zone_append": false, 00:16:08.082 "compare": false, 00:16:08.082 "compare_and_write": false, 00:16:08.082 "abort": false, 00:16:08.082 "seek_hole": false, 00:16:08.082 "seek_data": false, 00:16:08.082 "copy": false, 00:16:08.082 "nvme_iov_md": false 00:16:08.082 }, 00:16:08.082 "memory_domains": [ 00:16:08.082 { 00:16:08.082 "dma_device_id": "system", 00:16:08.082 "dma_device_type": 1 00:16:08.082 }, 00:16:08.082 { 00:16:08.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.082 "dma_device_type": 2 00:16:08.082 }, 00:16:08.082 { 00:16:08.082 "dma_device_id": "system", 00:16:08.082 "dma_device_type": 1 00:16:08.082 }, 00:16:08.082 { 00:16:08.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.082 "dma_device_type": 2 00:16:08.082 }, 00:16:08.082 { 00:16:08.082 "dma_device_id": "system", 00:16:08.082 "dma_device_type": 1 00:16:08.082 }, 00:16:08.082 { 00:16:08.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:16:08.082 "dma_device_type": 2 00:16:08.082 }, 00:16:08.082 { 00:16:08.082 "dma_device_id": "system", 00:16:08.082 "dma_device_type": 1 00:16:08.082 }, 00:16:08.082 { 00:16:08.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.082 "dma_device_type": 2 00:16:08.082 } 00:16:08.082 ], 00:16:08.082 "driver_specific": { 00:16:08.082 "raid": { 00:16:08.082 "uuid": "0612ae6c-4767-4e89-a7f8-901de50046a3", 00:16:08.082 "strip_size_kb": 64, 00:16:08.082 "state": "online", 00:16:08.082 "raid_level": "raid0", 00:16:08.082 "superblock": false, 00:16:08.082 "num_base_bdevs": 4, 00:16:08.082 "num_base_bdevs_discovered": 4, 00:16:08.082 "num_base_bdevs_operational": 4, 00:16:08.082 "base_bdevs_list": [ 00:16:08.082 { 00:16:08.082 "name": "BaseBdev1", 00:16:08.082 "uuid": "412ae116-f6ae-48c6-a2c4-3e5b65b89228", 00:16:08.082 "is_configured": true, 00:16:08.082 "data_offset": 0, 00:16:08.082 "data_size": 65536 00:16:08.082 }, 00:16:08.082 { 00:16:08.082 "name": "BaseBdev2", 00:16:08.082 "uuid": "0725b6f5-ba94-420f-9aec-32d6628816f5", 00:16:08.082 "is_configured": true, 00:16:08.082 "data_offset": 0, 00:16:08.082 "data_size": 65536 00:16:08.082 }, 00:16:08.082 { 00:16:08.082 "name": "BaseBdev3", 00:16:08.082 "uuid": "8da2d974-018c-4923-a8c3-64fe407a7c63", 00:16:08.082 "is_configured": true, 00:16:08.082 "data_offset": 0, 00:16:08.082 "data_size": 65536 00:16:08.082 }, 00:16:08.082 { 00:16:08.082 "name": "BaseBdev4", 00:16:08.082 "uuid": "6e1ff557-fec6-4b8f-b46e-770ad5f9272b", 00:16:08.082 "is_configured": true, 00:16:08.082 "data_offset": 0, 00:16:08.082 "data_size": 65536 00:16:08.082 } 00:16:08.082 ] 00:16:08.082 } 00:16:08.082 } 00:16:08.082 }' 00:16:08.082 13:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:08.082 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:08.082 BaseBdev2 00:16:08.082 BaseBdev3 
00:16:08.082 BaseBdev4' 00:16:08.082 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:08.082 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:08.082 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:08.082 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:08.082 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.082 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.082 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:08.082 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.341 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:08.341 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:08.341 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:08.341 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:08.341 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:08.341 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.341 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.341 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.341 13:10:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:08.341 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:08.341 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:08.341 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:08.341 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:08.341 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.341 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.341 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.341 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:08.341 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:08.341 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:08.341 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:08.341 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.341 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.341 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:08.341 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.341 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:08.341 13:10:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:08.341 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:08.341 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.341 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.341 [2024-12-06 13:10:55.277143] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:08.341 [2024-12-06 13:10:55.277230] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:08.341 [2024-12-06 13:10:55.277315] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:08.599 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.599 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:08.599 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:16:08.599 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:08.599 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:08.600 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:16:08.600 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:16:08.600 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:08.600 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:16:08.600 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:08.600 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:16:08.600 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:08.600 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.600 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.600 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.600 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.600 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.600 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.600 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.600 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.600 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.600 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.600 "name": "Existed_Raid", 00:16:08.600 "uuid": "0612ae6c-4767-4e89-a7f8-901de50046a3", 00:16:08.600 "strip_size_kb": 64, 00:16:08.600 "state": "offline", 00:16:08.600 "raid_level": "raid0", 00:16:08.600 "superblock": false, 00:16:08.600 "num_base_bdevs": 4, 00:16:08.600 "num_base_bdevs_discovered": 3, 00:16:08.600 "num_base_bdevs_operational": 3, 00:16:08.600 "base_bdevs_list": [ 00:16:08.600 { 00:16:08.600 "name": null, 00:16:08.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.600 "is_configured": false, 00:16:08.600 "data_offset": 0, 00:16:08.600 "data_size": 65536 00:16:08.600 }, 00:16:08.600 { 00:16:08.600 "name": "BaseBdev2", 00:16:08.600 "uuid": "0725b6f5-ba94-420f-9aec-32d6628816f5", 00:16:08.600 "is_configured": 
true, 00:16:08.600 "data_offset": 0, 00:16:08.600 "data_size": 65536 00:16:08.600 }, 00:16:08.600 { 00:16:08.600 "name": "BaseBdev3", 00:16:08.600 "uuid": "8da2d974-018c-4923-a8c3-64fe407a7c63", 00:16:08.600 "is_configured": true, 00:16:08.600 "data_offset": 0, 00:16:08.600 "data_size": 65536 00:16:08.600 }, 00:16:08.600 { 00:16:08.600 "name": "BaseBdev4", 00:16:08.600 "uuid": "6e1ff557-fec6-4b8f-b46e-770ad5f9272b", 00:16:08.600 "is_configured": true, 00:16:08.600 "data_offset": 0, 00:16:08.600 "data_size": 65536 00:16:08.600 } 00:16:08.600 ] 00:16:08.600 }' 00:16:08.600 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.600 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.229 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:09.229 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:09.229 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.229 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:09.229 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.229 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.229 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.229 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:09.229 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:09.229 13:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:09.229 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:09.229 13:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.229 [2024-12-06 13:10:55.976209] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:09.229 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.229 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:09.229 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:09.229 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.229 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.229 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:09.229 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.229 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.229 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:09.229 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:09.229 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:09.230 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.230 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.230 [2024-12-06 13:10:56.135330] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:09.489 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.489 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:09.489 13:10:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:09.489 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:09.489 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.489 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.489 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.489 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.489 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:09.489 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:09.489 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:09.489 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.489 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.489 [2024-12-06 13:10:56.289134] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:09.489 [2024-12-06 13:10:56.289229] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:09.489 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.489 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:09.489 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:09.489 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.489 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:16:09.489 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.489 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.489 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.489 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:09.489 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:09.489 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:09.489 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:09.489 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:09.489 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:09.489 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.489 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.748 BaseBdev2 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.749 [ 00:16:09.749 { 00:16:09.749 "name": "BaseBdev2", 00:16:09.749 "aliases": [ 00:16:09.749 "8a07d1a2-7efc-466f-a622-fe8f668982b8" 00:16:09.749 ], 00:16:09.749 "product_name": "Malloc disk", 00:16:09.749 "block_size": 512, 00:16:09.749 "num_blocks": 65536, 00:16:09.749 "uuid": "8a07d1a2-7efc-466f-a622-fe8f668982b8", 00:16:09.749 "assigned_rate_limits": { 00:16:09.749 "rw_ios_per_sec": 0, 00:16:09.749 "rw_mbytes_per_sec": 0, 00:16:09.749 "r_mbytes_per_sec": 0, 00:16:09.749 "w_mbytes_per_sec": 0 00:16:09.749 }, 00:16:09.749 "claimed": false, 00:16:09.749 "zoned": false, 00:16:09.749 "supported_io_types": { 00:16:09.749 "read": true, 00:16:09.749 "write": true, 00:16:09.749 "unmap": true, 00:16:09.749 "flush": true, 00:16:09.749 "reset": true, 00:16:09.749 "nvme_admin": false, 00:16:09.749 "nvme_io": false, 00:16:09.749 "nvme_io_md": false, 00:16:09.749 "write_zeroes": true, 00:16:09.749 "zcopy": true, 00:16:09.749 "get_zone_info": false, 00:16:09.749 "zone_management": false, 00:16:09.749 "zone_append": false, 00:16:09.749 "compare": false, 00:16:09.749 "compare_and_write": false, 00:16:09.749 "abort": true, 00:16:09.749 "seek_hole": false, 00:16:09.749 
"seek_data": false, 00:16:09.749 "copy": true, 00:16:09.749 "nvme_iov_md": false 00:16:09.749 }, 00:16:09.749 "memory_domains": [ 00:16:09.749 { 00:16:09.749 "dma_device_id": "system", 00:16:09.749 "dma_device_type": 1 00:16:09.749 }, 00:16:09.749 { 00:16:09.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.749 "dma_device_type": 2 00:16:09.749 } 00:16:09.749 ], 00:16:09.749 "driver_specific": {} 00:16:09.749 } 00:16:09.749 ] 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.749 BaseBdev3 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.749 [ 00:16:09.749 { 00:16:09.749 "name": "BaseBdev3", 00:16:09.749 "aliases": [ 00:16:09.749 "3e77e072-f9a3-484d-9ade-c3d1b36ee072" 00:16:09.749 ], 00:16:09.749 "product_name": "Malloc disk", 00:16:09.749 "block_size": 512, 00:16:09.749 "num_blocks": 65536, 00:16:09.749 "uuid": "3e77e072-f9a3-484d-9ade-c3d1b36ee072", 00:16:09.749 "assigned_rate_limits": { 00:16:09.749 "rw_ios_per_sec": 0, 00:16:09.749 "rw_mbytes_per_sec": 0, 00:16:09.749 "r_mbytes_per_sec": 0, 00:16:09.749 "w_mbytes_per_sec": 0 00:16:09.749 }, 00:16:09.749 "claimed": false, 00:16:09.749 "zoned": false, 00:16:09.749 "supported_io_types": { 00:16:09.749 "read": true, 00:16:09.749 "write": true, 00:16:09.749 "unmap": true, 00:16:09.749 "flush": true, 00:16:09.749 "reset": true, 00:16:09.749 "nvme_admin": false, 00:16:09.749 "nvme_io": false, 00:16:09.749 "nvme_io_md": false, 00:16:09.749 "write_zeroes": true, 00:16:09.749 "zcopy": true, 00:16:09.749 "get_zone_info": false, 00:16:09.749 "zone_management": false, 00:16:09.749 "zone_append": false, 00:16:09.749 "compare": false, 00:16:09.749 "compare_and_write": false, 00:16:09.749 "abort": true, 00:16:09.749 "seek_hole": false, 00:16:09.749 "seek_data": false, 
00:16:09.749 "copy": true, 00:16:09.749 "nvme_iov_md": false 00:16:09.749 }, 00:16:09.749 "memory_domains": [ 00:16:09.749 { 00:16:09.749 "dma_device_id": "system", 00:16:09.749 "dma_device_type": 1 00:16:09.749 }, 00:16:09.749 { 00:16:09.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.749 "dma_device_type": 2 00:16:09.749 } 00:16:09.749 ], 00:16:09.749 "driver_specific": {} 00:16:09.749 } 00:16:09.749 ] 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.749 BaseBdev4 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:09.749 
13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.749 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.749 [ 00:16:09.749 { 00:16:09.749 "name": "BaseBdev4", 00:16:09.749 "aliases": [ 00:16:09.749 "6c50f130-2b06-4619-8478-fbb974f6d1ce" 00:16:09.749 ], 00:16:09.749 "product_name": "Malloc disk", 00:16:09.749 "block_size": 512, 00:16:09.749 "num_blocks": 65536, 00:16:09.749 "uuid": "6c50f130-2b06-4619-8478-fbb974f6d1ce", 00:16:09.749 "assigned_rate_limits": { 00:16:09.749 "rw_ios_per_sec": 0, 00:16:09.749 "rw_mbytes_per_sec": 0, 00:16:09.749 "r_mbytes_per_sec": 0, 00:16:09.749 "w_mbytes_per_sec": 0 00:16:09.749 }, 00:16:09.749 "claimed": false, 00:16:09.749 "zoned": false, 00:16:09.749 "supported_io_types": { 00:16:09.749 "read": true, 00:16:09.749 "write": true, 00:16:09.749 "unmap": true, 00:16:09.749 "flush": true, 00:16:09.749 "reset": true, 00:16:09.749 "nvme_admin": false, 00:16:09.749 "nvme_io": false, 00:16:09.749 "nvme_io_md": false, 00:16:09.749 "write_zeroes": true, 00:16:09.749 "zcopy": true, 00:16:09.749 "get_zone_info": false, 00:16:09.749 "zone_management": false, 00:16:09.749 "zone_append": false, 00:16:09.750 "compare": false, 00:16:09.750 "compare_and_write": false, 00:16:09.750 "abort": true, 00:16:09.750 "seek_hole": false, 00:16:09.750 "seek_data": false, 00:16:09.750 
"copy": true, 00:16:09.750 "nvme_iov_md": false 00:16:09.750 }, 00:16:09.750 "memory_domains": [ 00:16:09.750 { 00:16:09.750 "dma_device_id": "system", 00:16:09.750 "dma_device_type": 1 00:16:09.750 }, 00:16:09.750 { 00:16:09.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.750 "dma_device_type": 2 00:16:09.750 } 00:16:09.750 ], 00:16:09.750 "driver_specific": {} 00:16:09.750 } 00:16:09.750 ] 00:16:09.750 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.750 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:09.750 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:09.750 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:09.750 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:09.750 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.750 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.750 [2024-12-06 13:10:56.707261] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:09.750 [2024-12-06 13:10:56.707348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:09.750 [2024-12-06 13:10:56.707397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:09.750 [2024-12-06 13:10:56.710592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:09.750 [2024-12-06 13:10:56.710730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:09.750 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.750 13:10:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:09.750 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:09.750 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:09.750 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:09.750 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.750 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:09.750 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.750 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.750 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.750 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.750 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.750 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.750 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.750 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.750 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.008 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.008 "name": "Existed_Raid", 00:16:10.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.008 "strip_size_kb": 64, 00:16:10.008 "state": "configuring", 00:16:10.008 
"raid_level": "raid0", 00:16:10.008 "superblock": false, 00:16:10.008 "num_base_bdevs": 4, 00:16:10.008 "num_base_bdevs_discovered": 3, 00:16:10.008 "num_base_bdevs_operational": 4, 00:16:10.008 "base_bdevs_list": [ 00:16:10.008 { 00:16:10.008 "name": "BaseBdev1", 00:16:10.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.008 "is_configured": false, 00:16:10.008 "data_offset": 0, 00:16:10.008 "data_size": 0 00:16:10.008 }, 00:16:10.008 { 00:16:10.008 "name": "BaseBdev2", 00:16:10.008 "uuid": "8a07d1a2-7efc-466f-a622-fe8f668982b8", 00:16:10.008 "is_configured": true, 00:16:10.008 "data_offset": 0, 00:16:10.008 "data_size": 65536 00:16:10.008 }, 00:16:10.008 { 00:16:10.008 "name": "BaseBdev3", 00:16:10.008 "uuid": "3e77e072-f9a3-484d-9ade-c3d1b36ee072", 00:16:10.008 "is_configured": true, 00:16:10.008 "data_offset": 0, 00:16:10.008 "data_size": 65536 00:16:10.008 }, 00:16:10.008 { 00:16:10.008 "name": "BaseBdev4", 00:16:10.008 "uuid": "6c50f130-2b06-4619-8478-fbb974f6d1ce", 00:16:10.008 "is_configured": true, 00:16:10.008 "data_offset": 0, 00:16:10.008 "data_size": 65536 00:16:10.008 } 00:16:10.008 ] 00:16:10.008 }' 00:16:10.008 13:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.008 13:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.267 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:10.267 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.267 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.267 [2024-12-06 13:10:57.275503] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:10.267 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.267 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:10.267 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:10.267 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:10.525 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:10.525 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.525 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:10.525 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.525 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.525 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.525 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.525 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.525 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.525 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.525 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.525 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.525 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.525 "name": "Existed_Raid", 00:16:10.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.525 "strip_size_kb": 64, 00:16:10.525 "state": "configuring", 00:16:10.525 "raid_level": "raid0", 00:16:10.525 "superblock": false, 00:16:10.525 
"num_base_bdevs": 4, 00:16:10.525 "num_base_bdevs_discovered": 2, 00:16:10.525 "num_base_bdevs_operational": 4, 00:16:10.525 "base_bdevs_list": [ 00:16:10.525 { 00:16:10.525 "name": "BaseBdev1", 00:16:10.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.525 "is_configured": false, 00:16:10.525 "data_offset": 0, 00:16:10.525 "data_size": 0 00:16:10.525 }, 00:16:10.525 { 00:16:10.525 "name": null, 00:16:10.525 "uuid": "8a07d1a2-7efc-466f-a622-fe8f668982b8", 00:16:10.525 "is_configured": false, 00:16:10.525 "data_offset": 0, 00:16:10.525 "data_size": 65536 00:16:10.525 }, 00:16:10.525 { 00:16:10.525 "name": "BaseBdev3", 00:16:10.525 "uuid": "3e77e072-f9a3-484d-9ade-c3d1b36ee072", 00:16:10.525 "is_configured": true, 00:16:10.525 "data_offset": 0, 00:16:10.525 "data_size": 65536 00:16:10.525 }, 00:16:10.525 { 00:16:10.525 "name": "BaseBdev4", 00:16:10.525 "uuid": "6c50f130-2b06-4619-8478-fbb974f6d1ce", 00:16:10.525 "is_configured": true, 00:16:10.525 "data_offset": 0, 00:16:10.525 "data_size": 65536 00:16:10.525 } 00:16:10.525 ] 00:16:10.525 }' 00:16:10.525 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.525 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.093 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.093 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:11.093 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.093 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.093 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.093 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:11.093 13:10:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:11.093 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.093 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.093 [2024-12-06 13:10:57.926947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:11.093 BaseBdev1 00:16:11.093 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.093 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:11.093 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:11.093 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:11.093 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:11.093 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:11.093 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:11.093 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:11.093 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.093 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.093 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.093 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:11.093 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.093 13:10:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:11.093 [ 00:16:11.093 { 00:16:11.093 "name": "BaseBdev1", 00:16:11.093 "aliases": [ 00:16:11.093 "e89c67fd-0827-4674-81ce-b63f7506f884" 00:16:11.093 ], 00:16:11.093 "product_name": "Malloc disk", 00:16:11.093 "block_size": 512, 00:16:11.093 "num_blocks": 65536, 00:16:11.093 "uuid": "e89c67fd-0827-4674-81ce-b63f7506f884", 00:16:11.093 "assigned_rate_limits": { 00:16:11.093 "rw_ios_per_sec": 0, 00:16:11.093 "rw_mbytes_per_sec": 0, 00:16:11.093 "r_mbytes_per_sec": 0, 00:16:11.093 "w_mbytes_per_sec": 0 00:16:11.093 }, 00:16:11.093 "claimed": true, 00:16:11.093 "claim_type": "exclusive_write", 00:16:11.093 "zoned": false, 00:16:11.093 "supported_io_types": { 00:16:11.093 "read": true, 00:16:11.093 "write": true, 00:16:11.093 "unmap": true, 00:16:11.093 "flush": true, 00:16:11.093 "reset": true, 00:16:11.093 "nvme_admin": false, 00:16:11.093 "nvme_io": false, 00:16:11.093 "nvme_io_md": false, 00:16:11.093 "write_zeroes": true, 00:16:11.093 "zcopy": true, 00:16:11.093 "get_zone_info": false, 00:16:11.093 "zone_management": false, 00:16:11.093 "zone_append": false, 00:16:11.093 "compare": false, 00:16:11.093 "compare_and_write": false, 00:16:11.093 "abort": true, 00:16:11.093 "seek_hole": false, 00:16:11.093 "seek_data": false, 00:16:11.093 "copy": true, 00:16:11.093 "nvme_iov_md": false 00:16:11.093 }, 00:16:11.093 "memory_domains": [ 00:16:11.093 { 00:16:11.093 "dma_device_id": "system", 00:16:11.093 "dma_device_type": 1 00:16:11.094 }, 00:16:11.094 { 00:16:11.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.094 "dma_device_type": 2 00:16:11.094 } 00:16:11.094 ], 00:16:11.094 "driver_specific": {} 00:16:11.094 } 00:16:11.094 ] 00:16:11.094 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.094 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:11.094 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:11.094 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:11.094 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:11.094 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:11.094 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:11.094 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:11.094 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.094 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.094 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.094 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.094 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.094 13:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.094 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.094 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.094 13:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.094 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.094 "name": "Existed_Raid", 00:16:11.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.094 "strip_size_kb": 64, 00:16:11.094 "state": "configuring", 00:16:11.094 "raid_level": "raid0", 00:16:11.094 "superblock": false, 
00:16:11.094 "num_base_bdevs": 4, 00:16:11.094 "num_base_bdevs_discovered": 3, 00:16:11.094 "num_base_bdevs_operational": 4, 00:16:11.094 "base_bdevs_list": [ 00:16:11.094 { 00:16:11.094 "name": "BaseBdev1", 00:16:11.094 "uuid": "e89c67fd-0827-4674-81ce-b63f7506f884", 00:16:11.094 "is_configured": true, 00:16:11.094 "data_offset": 0, 00:16:11.094 "data_size": 65536 00:16:11.094 }, 00:16:11.094 { 00:16:11.094 "name": null, 00:16:11.094 "uuid": "8a07d1a2-7efc-466f-a622-fe8f668982b8", 00:16:11.094 "is_configured": false, 00:16:11.094 "data_offset": 0, 00:16:11.094 "data_size": 65536 00:16:11.094 }, 00:16:11.094 { 00:16:11.094 "name": "BaseBdev3", 00:16:11.094 "uuid": "3e77e072-f9a3-484d-9ade-c3d1b36ee072", 00:16:11.094 "is_configured": true, 00:16:11.094 "data_offset": 0, 00:16:11.094 "data_size": 65536 00:16:11.094 }, 00:16:11.094 { 00:16:11.094 "name": "BaseBdev4", 00:16:11.094 "uuid": "6c50f130-2b06-4619-8478-fbb974f6d1ce", 00:16:11.094 "is_configured": true, 00:16:11.094 "data_offset": 0, 00:16:11.094 "data_size": 65536 00:16:11.094 } 00:16:11.094 ] 00:16:11.094 }' 00:16:11.094 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.094 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.661 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.661 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:11.661 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.661 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.661 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.661 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:11.661 13:10:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:11.661 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.661 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.661 [2024-12-06 13:10:58.599438] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:11.661 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.661 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:11.661 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:11.661 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:11.661 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:11.661 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:11.661 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:11.661 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.661 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.661 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.661 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.661 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.661 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.661 13:10:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:11.661 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.661 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.661 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.661 "name": "Existed_Raid", 00:16:11.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.661 "strip_size_kb": 64, 00:16:11.661 "state": "configuring", 00:16:11.661 "raid_level": "raid0", 00:16:11.661 "superblock": false, 00:16:11.661 "num_base_bdevs": 4, 00:16:11.661 "num_base_bdevs_discovered": 2, 00:16:11.661 "num_base_bdevs_operational": 4, 00:16:11.661 "base_bdevs_list": [ 00:16:11.661 { 00:16:11.661 "name": "BaseBdev1", 00:16:11.661 "uuid": "e89c67fd-0827-4674-81ce-b63f7506f884", 00:16:11.661 "is_configured": true, 00:16:11.661 "data_offset": 0, 00:16:11.661 "data_size": 65536 00:16:11.661 }, 00:16:11.661 { 00:16:11.661 "name": null, 00:16:11.661 "uuid": "8a07d1a2-7efc-466f-a622-fe8f668982b8", 00:16:11.661 "is_configured": false, 00:16:11.661 "data_offset": 0, 00:16:11.661 "data_size": 65536 00:16:11.661 }, 00:16:11.661 { 00:16:11.661 "name": null, 00:16:11.661 "uuid": "3e77e072-f9a3-484d-9ade-c3d1b36ee072", 00:16:11.661 "is_configured": false, 00:16:11.661 "data_offset": 0, 00:16:11.661 "data_size": 65536 00:16:11.661 }, 00:16:11.661 { 00:16:11.661 "name": "BaseBdev4", 00:16:11.661 "uuid": "6c50f130-2b06-4619-8478-fbb974f6d1ce", 00:16:11.661 "is_configured": true, 00:16:11.661 "data_offset": 0, 00:16:11.661 "data_size": 65536 00:16:11.661 } 00:16:11.661 ] 00:16:11.661 }' 00:16:11.661 13:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.661 13:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.228 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:16:12.228 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.228 13:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.228 13:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.228 13:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.228 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:12.228 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:12.228 13:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.228 13:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.228 [2024-12-06 13:10:59.187579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:12.228 13:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.228 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:12.228 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:12.228 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:12.228 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:12.228 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:12.228 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:12.228 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:16:12.228 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.228 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.228 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.228 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.228 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.228 13:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.228 13:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.228 13:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.487 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.487 "name": "Existed_Raid", 00:16:12.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.487 "strip_size_kb": 64, 00:16:12.487 "state": "configuring", 00:16:12.487 "raid_level": "raid0", 00:16:12.487 "superblock": false, 00:16:12.487 "num_base_bdevs": 4, 00:16:12.487 "num_base_bdevs_discovered": 3, 00:16:12.487 "num_base_bdevs_operational": 4, 00:16:12.487 "base_bdevs_list": [ 00:16:12.487 { 00:16:12.487 "name": "BaseBdev1", 00:16:12.487 "uuid": "e89c67fd-0827-4674-81ce-b63f7506f884", 00:16:12.487 "is_configured": true, 00:16:12.487 "data_offset": 0, 00:16:12.487 "data_size": 65536 00:16:12.487 }, 00:16:12.487 { 00:16:12.487 "name": null, 00:16:12.487 "uuid": "8a07d1a2-7efc-466f-a622-fe8f668982b8", 00:16:12.487 "is_configured": false, 00:16:12.487 "data_offset": 0, 00:16:12.487 "data_size": 65536 00:16:12.487 }, 00:16:12.487 { 00:16:12.487 "name": "BaseBdev3", 00:16:12.487 "uuid": "3e77e072-f9a3-484d-9ade-c3d1b36ee072", 00:16:12.487 "is_configured": 
true, 00:16:12.487 "data_offset": 0, 00:16:12.487 "data_size": 65536 00:16:12.487 }, 00:16:12.487 { 00:16:12.487 "name": "BaseBdev4", 00:16:12.487 "uuid": "6c50f130-2b06-4619-8478-fbb974f6d1ce", 00:16:12.487 "is_configured": true, 00:16:12.487 "data_offset": 0, 00:16:12.487 "data_size": 65536 00:16:12.487 } 00:16:12.487 ] 00:16:12.487 }' 00:16:12.487 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.487 13:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.747 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.747 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:13.006 13:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.006 13:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.006 13:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.006 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:13.006 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:13.006 13:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.006 13:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.006 [2024-12-06 13:10:59.815972] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:13.006 13:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.006 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:13.006 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:13.006 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:13.006 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:13.006 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.006 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:13.006 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.006 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.006 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.006 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.006 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.006 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.006 13:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.006 13:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.006 13:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.006 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.006 "name": "Existed_Raid", 00:16:13.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.006 "strip_size_kb": 64, 00:16:13.006 "state": "configuring", 00:16:13.006 "raid_level": "raid0", 00:16:13.006 "superblock": false, 00:16:13.006 "num_base_bdevs": 4, 00:16:13.006 "num_base_bdevs_discovered": 2, 00:16:13.006 "num_base_bdevs_operational": 4, 00:16:13.006 
"base_bdevs_list": [ 00:16:13.006 { 00:16:13.006 "name": null, 00:16:13.006 "uuid": "e89c67fd-0827-4674-81ce-b63f7506f884", 00:16:13.006 "is_configured": false, 00:16:13.006 "data_offset": 0, 00:16:13.006 "data_size": 65536 00:16:13.006 }, 00:16:13.006 { 00:16:13.006 "name": null, 00:16:13.006 "uuid": "8a07d1a2-7efc-466f-a622-fe8f668982b8", 00:16:13.006 "is_configured": false, 00:16:13.006 "data_offset": 0, 00:16:13.006 "data_size": 65536 00:16:13.006 }, 00:16:13.006 { 00:16:13.006 "name": "BaseBdev3", 00:16:13.006 "uuid": "3e77e072-f9a3-484d-9ade-c3d1b36ee072", 00:16:13.006 "is_configured": true, 00:16:13.006 "data_offset": 0, 00:16:13.006 "data_size": 65536 00:16:13.006 }, 00:16:13.006 { 00:16:13.006 "name": "BaseBdev4", 00:16:13.006 "uuid": "6c50f130-2b06-4619-8478-fbb974f6d1ce", 00:16:13.006 "is_configured": true, 00:16:13.006 "data_offset": 0, 00:16:13.006 "data_size": 65536 00:16:13.006 } 00:16:13.006 ] 00:16:13.006 }' 00:16:13.006 13:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.006 13:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.573 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.573 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.573 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:13.573 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.573 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.573 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:13.573 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:13.573 13:11:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.573 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.573 [2024-12-06 13:11:00.463641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:13.573 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.573 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:13.573 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:13.573 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:13.573 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:13.573 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.573 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:13.573 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.573 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.573 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.573 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.573 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.573 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.573 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.573 13:11:00 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:16:13.573 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.573 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.573 "name": "Existed_Raid", 00:16:13.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.573 "strip_size_kb": 64, 00:16:13.573 "state": "configuring", 00:16:13.573 "raid_level": "raid0", 00:16:13.573 "superblock": false, 00:16:13.573 "num_base_bdevs": 4, 00:16:13.573 "num_base_bdevs_discovered": 3, 00:16:13.573 "num_base_bdevs_operational": 4, 00:16:13.573 "base_bdevs_list": [ 00:16:13.573 { 00:16:13.573 "name": null, 00:16:13.573 "uuid": "e89c67fd-0827-4674-81ce-b63f7506f884", 00:16:13.573 "is_configured": false, 00:16:13.573 "data_offset": 0, 00:16:13.573 "data_size": 65536 00:16:13.573 }, 00:16:13.573 { 00:16:13.573 "name": "BaseBdev2", 00:16:13.573 "uuid": "8a07d1a2-7efc-466f-a622-fe8f668982b8", 00:16:13.573 "is_configured": true, 00:16:13.573 "data_offset": 0, 00:16:13.573 "data_size": 65536 00:16:13.573 }, 00:16:13.573 { 00:16:13.573 "name": "BaseBdev3", 00:16:13.573 "uuid": "3e77e072-f9a3-484d-9ade-c3d1b36ee072", 00:16:13.573 "is_configured": true, 00:16:13.573 "data_offset": 0, 00:16:13.573 "data_size": 65536 00:16:13.573 }, 00:16:13.573 { 00:16:13.573 "name": "BaseBdev4", 00:16:13.573 "uuid": "6c50f130-2b06-4619-8478-fbb974f6d1ce", 00:16:13.573 "is_configured": true, 00:16:13.573 "data_offset": 0, 00:16:13.573 "data_size": 65536 00:16:13.573 } 00:16:13.573 ] 00:16:13.573 }' 00:16:13.573 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.573 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.140 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.140 13:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:16:14.140 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.140 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.140 13:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.140 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:14.140 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.140 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:14.140 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.140 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.140 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.140 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e89c67fd-0827-4674-81ce-b63f7506f884 00:16:14.140 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.140 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.140 [2024-12-06 13:11:01.110221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:14.140 [2024-12-06 13:11:01.110313] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:14.140 [2024-12-06 13:11:01.110329] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:16:14.140 [2024-12-06 13:11:01.110811] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:14.140 [2024-12-06 13:11:01.111031] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:14.140 [2024-12-06 13:11:01.111056] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:14.140 [2024-12-06 13:11:01.111431] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:14.140 NewBaseBdev 00:16:14.140 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.140 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:14.140 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:14.140 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:14.140 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:14.140 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:14.140 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:14.140 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:14.140 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.140 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.140 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.140 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:14.140 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.140 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.140 [ 00:16:14.140 { 
00:16:14.140 "name": "NewBaseBdev", 00:16:14.140 "aliases": [ 00:16:14.140 "e89c67fd-0827-4674-81ce-b63f7506f884" 00:16:14.140 ], 00:16:14.140 "product_name": "Malloc disk", 00:16:14.140 "block_size": 512, 00:16:14.140 "num_blocks": 65536, 00:16:14.140 "uuid": "e89c67fd-0827-4674-81ce-b63f7506f884", 00:16:14.140 "assigned_rate_limits": { 00:16:14.140 "rw_ios_per_sec": 0, 00:16:14.140 "rw_mbytes_per_sec": 0, 00:16:14.141 "r_mbytes_per_sec": 0, 00:16:14.141 "w_mbytes_per_sec": 0 00:16:14.141 }, 00:16:14.141 "claimed": true, 00:16:14.141 "claim_type": "exclusive_write", 00:16:14.141 "zoned": false, 00:16:14.141 "supported_io_types": { 00:16:14.141 "read": true, 00:16:14.141 "write": true, 00:16:14.141 "unmap": true, 00:16:14.141 "flush": true, 00:16:14.141 "reset": true, 00:16:14.141 "nvme_admin": false, 00:16:14.141 "nvme_io": false, 00:16:14.141 "nvme_io_md": false, 00:16:14.141 "write_zeroes": true, 00:16:14.141 "zcopy": true, 00:16:14.141 "get_zone_info": false, 00:16:14.141 "zone_management": false, 00:16:14.141 "zone_append": false, 00:16:14.141 "compare": false, 00:16:14.141 "compare_and_write": false, 00:16:14.141 "abort": true, 00:16:14.141 "seek_hole": false, 00:16:14.141 "seek_data": false, 00:16:14.141 "copy": true, 00:16:14.141 "nvme_iov_md": false 00:16:14.141 }, 00:16:14.141 "memory_domains": [ 00:16:14.141 { 00:16:14.141 "dma_device_id": "system", 00:16:14.141 "dma_device_type": 1 00:16:14.141 }, 00:16:14.141 { 00:16:14.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.141 "dma_device_type": 2 00:16:14.141 } 00:16:14.141 ], 00:16:14.141 "driver_specific": {} 00:16:14.141 } 00:16:14.141 ] 00:16:14.141 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.141 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:14.141 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:16:14.141 
13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:14.141 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.141 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:14.141 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:14.141 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:14.141 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.141 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.141 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.141 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.141 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.141 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.141 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.141 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.399 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.399 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.399 "name": "Existed_Raid", 00:16:14.399 "uuid": "9501dafb-e7da-436d-b36b-98d45290197e", 00:16:14.399 "strip_size_kb": 64, 00:16:14.399 "state": "online", 00:16:14.399 "raid_level": "raid0", 00:16:14.399 "superblock": false, 00:16:14.399 "num_base_bdevs": 4, 00:16:14.399 "num_base_bdevs_discovered": 4, 00:16:14.399 
"num_base_bdevs_operational": 4, 00:16:14.399 "base_bdevs_list": [ 00:16:14.399 { 00:16:14.399 "name": "NewBaseBdev", 00:16:14.399 "uuid": "e89c67fd-0827-4674-81ce-b63f7506f884", 00:16:14.399 "is_configured": true, 00:16:14.399 "data_offset": 0, 00:16:14.399 "data_size": 65536 00:16:14.399 }, 00:16:14.399 { 00:16:14.399 "name": "BaseBdev2", 00:16:14.399 "uuid": "8a07d1a2-7efc-466f-a622-fe8f668982b8", 00:16:14.399 "is_configured": true, 00:16:14.399 "data_offset": 0, 00:16:14.399 "data_size": 65536 00:16:14.399 }, 00:16:14.399 { 00:16:14.399 "name": "BaseBdev3", 00:16:14.399 "uuid": "3e77e072-f9a3-484d-9ade-c3d1b36ee072", 00:16:14.399 "is_configured": true, 00:16:14.399 "data_offset": 0, 00:16:14.399 "data_size": 65536 00:16:14.399 }, 00:16:14.399 { 00:16:14.399 "name": "BaseBdev4", 00:16:14.399 "uuid": "6c50f130-2b06-4619-8478-fbb974f6d1ce", 00:16:14.399 "is_configured": true, 00:16:14.399 "data_offset": 0, 00:16:14.399 "data_size": 65536 00:16:14.399 } 00:16:14.399 ] 00:16:14.399 }' 00:16:14.399 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.399 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.976 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:14.976 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:14.976 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:14.976 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:14.976 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:14.976 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:14.976 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:16:14.976 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.976 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.976 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:14.976 [2024-12-06 13:11:01.691036] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:14.976 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.976 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:14.976 "name": "Existed_Raid", 00:16:14.976 "aliases": [ 00:16:14.976 "9501dafb-e7da-436d-b36b-98d45290197e" 00:16:14.976 ], 00:16:14.976 "product_name": "Raid Volume", 00:16:14.976 "block_size": 512, 00:16:14.976 "num_blocks": 262144, 00:16:14.976 "uuid": "9501dafb-e7da-436d-b36b-98d45290197e", 00:16:14.976 "assigned_rate_limits": { 00:16:14.976 "rw_ios_per_sec": 0, 00:16:14.976 "rw_mbytes_per_sec": 0, 00:16:14.976 "r_mbytes_per_sec": 0, 00:16:14.976 "w_mbytes_per_sec": 0 00:16:14.976 }, 00:16:14.976 "claimed": false, 00:16:14.976 "zoned": false, 00:16:14.976 "supported_io_types": { 00:16:14.976 "read": true, 00:16:14.976 "write": true, 00:16:14.976 "unmap": true, 00:16:14.976 "flush": true, 00:16:14.976 "reset": true, 00:16:14.976 "nvme_admin": false, 00:16:14.976 "nvme_io": false, 00:16:14.976 "nvme_io_md": false, 00:16:14.976 "write_zeroes": true, 00:16:14.976 "zcopy": false, 00:16:14.976 "get_zone_info": false, 00:16:14.976 "zone_management": false, 00:16:14.976 "zone_append": false, 00:16:14.976 "compare": false, 00:16:14.976 "compare_and_write": false, 00:16:14.976 "abort": false, 00:16:14.976 "seek_hole": false, 00:16:14.976 "seek_data": false, 00:16:14.976 "copy": false, 00:16:14.976 "nvme_iov_md": false 00:16:14.976 }, 00:16:14.976 "memory_domains": [ 00:16:14.976 { 00:16:14.976 "dma_device_id": "system", 
00:16:14.976 "dma_device_type": 1 00:16:14.976 }, 00:16:14.976 { 00:16:14.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.976 "dma_device_type": 2 00:16:14.976 }, 00:16:14.976 { 00:16:14.976 "dma_device_id": "system", 00:16:14.976 "dma_device_type": 1 00:16:14.976 }, 00:16:14.976 { 00:16:14.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.976 "dma_device_type": 2 00:16:14.976 }, 00:16:14.976 { 00:16:14.976 "dma_device_id": "system", 00:16:14.976 "dma_device_type": 1 00:16:14.976 }, 00:16:14.976 { 00:16:14.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.976 "dma_device_type": 2 00:16:14.976 }, 00:16:14.976 { 00:16:14.976 "dma_device_id": "system", 00:16:14.976 "dma_device_type": 1 00:16:14.976 }, 00:16:14.976 { 00:16:14.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.976 "dma_device_type": 2 00:16:14.976 } 00:16:14.976 ], 00:16:14.976 "driver_specific": { 00:16:14.976 "raid": { 00:16:14.976 "uuid": "9501dafb-e7da-436d-b36b-98d45290197e", 00:16:14.976 "strip_size_kb": 64, 00:16:14.976 "state": "online", 00:16:14.976 "raid_level": "raid0", 00:16:14.976 "superblock": false, 00:16:14.976 "num_base_bdevs": 4, 00:16:14.976 "num_base_bdevs_discovered": 4, 00:16:14.976 "num_base_bdevs_operational": 4, 00:16:14.976 "base_bdevs_list": [ 00:16:14.976 { 00:16:14.976 "name": "NewBaseBdev", 00:16:14.976 "uuid": "e89c67fd-0827-4674-81ce-b63f7506f884", 00:16:14.976 "is_configured": true, 00:16:14.976 "data_offset": 0, 00:16:14.976 "data_size": 65536 00:16:14.976 }, 00:16:14.976 { 00:16:14.976 "name": "BaseBdev2", 00:16:14.976 "uuid": "8a07d1a2-7efc-466f-a622-fe8f668982b8", 00:16:14.976 "is_configured": true, 00:16:14.976 "data_offset": 0, 00:16:14.976 "data_size": 65536 00:16:14.976 }, 00:16:14.976 { 00:16:14.976 "name": "BaseBdev3", 00:16:14.976 "uuid": "3e77e072-f9a3-484d-9ade-c3d1b36ee072", 00:16:14.976 "is_configured": true, 00:16:14.976 "data_offset": 0, 00:16:14.976 "data_size": 65536 00:16:14.976 }, 00:16:14.976 { 00:16:14.976 "name": "BaseBdev4", 
00:16:14.976 "uuid": "6c50f130-2b06-4619-8478-fbb974f6d1ce", 00:16:14.976 "is_configured": true, 00:16:14.976 "data_offset": 0, 00:16:14.976 "data_size": 65536 00:16:14.976 } 00:16:14.976 ] 00:16:14.976 } 00:16:14.976 } 00:16:14.976 }' 00:16:14.976 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:14.976 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:14.976 BaseBdev2 00:16:14.976 BaseBdev3 00:16:14.976 BaseBdev4' 00:16:14.976 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.976 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:14.976 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:14.976 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:14.976 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.976 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.976 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.976 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.976 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:14.976 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:14.976 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:14.976 13:11:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:14.976 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.976 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.976 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.976 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.976 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:14.976 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:14.976 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:14.976 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.976 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:14.976 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.976 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.977 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.977 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:14.977 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:14.977 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:14.977 13:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:14.977 13:11:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.977 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.977 13:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.235 13:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.235 13:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:15.235 13:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:15.235 13:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:15.235 13:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.235 13:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.235 [2024-12-06 13:11:02.034665] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:15.235 [2024-12-06 13:11:02.034770] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:15.235 [2024-12-06 13:11:02.034902] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:15.235 [2024-12-06 13:11:02.035028] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:15.235 [2024-12-06 13:11:02.035048] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:15.235 13:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.235 13:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69713 00:16:15.235 13:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 69713 ']' 00:16:15.235 13:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69713 00:16:15.235 13:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:15.235 13:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:15.235 13:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69713 00:16:15.235 killing process with pid 69713 00:16:15.235 13:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:15.235 13:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:15.235 13:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69713' 00:16:15.235 13:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69713 00:16:15.235 [2024-12-06 13:11:02.070012] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:15.235 13:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69713 00:16:15.494 [2024-12-06 13:11:02.444444] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:16.869 13:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:16.869 00:16:16.869 real 0m13.262s 00:16:16.869 user 0m21.726s 00:16:16.869 sys 0m1.972s 00:16:16.869 13:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:16.869 13:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.869 ************************************ 00:16:16.869 END TEST raid_state_function_test 00:16:16.869 ************************************ 00:16:16.869 13:11:03 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 
00:16:16.869 13:11:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:16.869 13:11:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:16.869 13:11:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:16.869 ************************************ 00:16:16.869 START TEST raid_state_function_test_sb 00:16:16.869 ************************************ 00:16:16.869 13:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:16:16.869 13:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:16:16.869 13:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:16.870 13:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:16.870 13:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:16.870 13:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:16.870 13:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:16.870 13:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:16.870 13:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:16.870 13:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:16.870 13:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:16.870 13:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:16.870 13:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:16.870 13:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:16.870 13:11:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:16.870 13:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:16.870 13:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:16.870 13:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:16.870 13:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:16.870 13:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:16.870 13:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:16.870 13:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:16.870 13:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:16.870 13:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:16.870 13:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:16.870 13:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:16:16.870 13:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:16.870 13:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:16.870 13:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:16.870 13:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:16.870 Process raid pid: 70403 00:16:16.870 13:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70403 00:16:16.870 13:11:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70403' 00:16:16.870 13:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:16.870 13:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70403 00:16:16.870 13:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70403 ']' 00:16:16.870 13:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.870 13:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:16.870 13:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.870 13:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:16.870 13:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.870 [2024-12-06 13:11:03.779432] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:16:16.870 [2024-12-06 13:11:03.779650] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.137 [2024-12-06 13:11:03.970417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.137 [2024-12-06 13:11:04.123962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.396 [2024-12-06 13:11:04.344013] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:17.397 [2024-12-06 13:11:04.344445] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:17.964 13:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:17.964 13:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:17.964 13:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:17.964 13:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.964 13:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.964 [2024-12-06 13:11:04.771691] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:17.964 [2024-12-06 13:11:04.771803] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:17.964 [2024-12-06 13:11:04.771833] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:17.964 [2024-12-06 13:11:04.771853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:17.965 [2024-12-06 13:11:04.771865] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:16:17.965 [2024-12-06 13:11:04.771883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:17.965 [2024-12-06 13:11:04.771909] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:17.965 [2024-12-06 13:11:04.771925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:17.965 13:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.965 13:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:17.965 13:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:17.965 13:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:17.965 13:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:17.965 13:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.965 13:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:17.965 13:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.965 13:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.965 13:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.965 13:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.965 13:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.965 13:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.965 13:11:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.965 13:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.965 13:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.965 13:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.965 "name": "Existed_Raid", 00:16:17.965 "uuid": "c96c11f2-3d08-4c6d-b722-0189fccab6a1", 00:16:17.965 "strip_size_kb": 64, 00:16:17.965 "state": "configuring", 00:16:17.965 "raid_level": "raid0", 00:16:17.965 "superblock": true, 00:16:17.965 "num_base_bdevs": 4, 00:16:17.965 "num_base_bdevs_discovered": 0, 00:16:17.965 "num_base_bdevs_operational": 4, 00:16:17.965 "base_bdevs_list": [ 00:16:17.965 { 00:16:17.965 "name": "BaseBdev1", 00:16:17.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.965 "is_configured": false, 00:16:17.965 "data_offset": 0, 00:16:17.965 "data_size": 0 00:16:17.965 }, 00:16:17.965 { 00:16:17.965 "name": "BaseBdev2", 00:16:17.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.965 "is_configured": false, 00:16:17.965 "data_offset": 0, 00:16:17.965 "data_size": 0 00:16:17.965 }, 00:16:17.965 { 00:16:17.965 "name": "BaseBdev3", 00:16:17.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.965 "is_configured": false, 00:16:17.965 "data_offset": 0, 00:16:17.965 "data_size": 0 00:16:17.965 }, 00:16:17.965 { 00:16:17.965 "name": "BaseBdev4", 00:16:17.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.965 "is_configured": false, 00:16:17.965 "data_offset": 0, 00:16:17.965 "data_size": 0 00:16:17.965 } 00:16:17.965 ] 00:16:17.965 }' 00:16:17.965 13:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.965 13:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.537 13:11:05 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:18.537 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.537 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.537 [2024-12-06 13:11:05.275767] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:18.537 [2024-12-06 13:11:05.275849] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:18.537 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.537 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:18.537 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.537 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.537 [2024-12-06 13:11:05.283743] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:18.537 [2024-12-06 13:11:05.283803] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:18.537 [2024-12-06 13:11:05.283820] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:18.537 [2024-12-06 13:11:05.283838] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:18.537 [2024-12-06 13:11:05.283849] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:18.537 [2024-12-06 13:11:05.283865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:18.537 [2024-12-06 13:11:05.283876] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:16:18.537 [2024-12-06 13:11:05.283892] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:18.537 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.537 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:18.538 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.538 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.538 [2024-12-06 13:11:05.330475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:18.538 BaseBdev1 00:16:18.538 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.538 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:18.538 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:18.538 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:18.538 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:18.538 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:18.538 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:18.538 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:18.538 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.538 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.538 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:18.538 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:18.538 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.538 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.538 [ 00:16:18.538 { 00:16:18.538 "name": "BaseBdev1", 00:16:18.538 "aliases": [ 00:16:18.538 "c1943383-d63c-4ff1-b40d-817d9846545e" 00:16:18.538 ], 00:16:18.538 "product_name": "Malloc disk", 00:16:18.538 "block_size": 512, 00:16:18.538 "num_blocks": 65536, 00:16:18.538 "uuid": "c1943383-d63c-4ff1-b40d-817d9846545e", 00:16:18.538 "assigned_rate_limits": { 00:16:18.538 "rw_ios_per_sec": 0, 00:16:18.538 "rw_mbytes_per_sec": 0, 00:16:18.538 "r_mbytes_per_sec": 0, 00:16:18.538 "w_mbytes_per_sec": 0 00:16:18.538 }, 00:16:18.538 "claimed": true, 00:16:18.538 "claim_type": "exclusive_write", 00:16:18.538 "zoned": false, 00:16:18.538 "supported_io_types": { 00:16:18.538 "read": true, 00:16:18.538 "write": true, 00:16:18.538 "unmap": true, 00:16:18.538 "flush": true, 00:16:18.538 "reset": true, 00:16:18.538 "nvme_admin": false, 00:16:18.538 "nvme_io": false, 00:16:18.538 "nvme_io_md": false, 00:16:18.538 "write_zeroes": true, 00:16:18.538 "zcopy": true, 00:16:18.538 "get_zone_info": false, 00:16:18.538 "zone_management": false, 00:16:18.538 "zone_append": false, 00:16:18.538 "compare": false, 00:16:18.538 "compare_and_write": false, 00:16:18.538 "abort": true, 00:16:18.538 "seek_hole": false, 00:16:18.538 "seek_data": false, 00:16:18.538 "copy": true, 00:16:18.538 "nvme_iov_md": false 00:16:18.538 }, 00:16:18.538 "memory_domains": [ 00:16:18.538 { 00:16:18.538 "dma_device_id": "system", 00:16:18.538 "dma_device_type": 1 00:16:18.538 }, 00:16:18.538 { 00:16:18.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.538 "dma_device_type": 2 00:16:18.538 } 00:16:18.538 ], 00:16:18.538 "driver_specific": {} 
00:16:18.538 } 00:16:18.538 ] 00:16:18.538 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.538 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:18.538 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:18.538 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:18.538 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:18.538 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:18.538 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.538 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:18.538 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.538 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.538 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.538 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.538 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.538 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.538 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.538 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.538 13:11:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.538 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.538 "name": "Existed_Raid", 00:16:18.538 "uuid": "bb0cb4ac-2e30-4a1c-a5fe-af0fe0292b08", 00:16:18.538 "strip_size_kb": 64, 00:16:18.538 "state": "configuring", 00:16:18.538 "raid_level": "raid0", 00:16:18.538 "superblock": true, 00:16:18.538 "num_base_bdevs": 4, 00:16:18.538 "num_base_bdevs_discovered": 1, 00:16:18.538 "num_base_bdevs_operational": 4, 00:16:18.538 "base_bdevs_list": [ 00:16:18.538 { 00:16:18.538 "name": "BaseBdev1", 00:16:18.538 "uuid": "c1943383-d63c-4ff1-b40d-817d9846545e", 00:16:18.538 "is_configured": true, 00:16:18.538 "data_offset": 2048, 00:16:18.538 "data_size": 63488 00:16:18.538 }, 00:16:18.538 { 00:16:18.538 "name": "BaseBdev2", 00:16:18.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.538 "is_configured": false, 00:16:18.538 "data_offset": 0, 00:16:18.538 "data_size": 0 00:16:18.538 }, 00:16:18.538 { 00:16:18.538 "name": "BaseBdev3", 00:16:18.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.538 "is_configured": false, 00:16:18.538 "data_offset": 0, 00:16:18.538 "data_size": 0 00:16:18.538 }, 00:16:18.538 { 00:16:18.538 "name": "BaseBdev4", 00:16:18.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.538 "is_configured": false, 00:16:18.538 "data_offset": 0, 00:16:18.538 "data_size": 0 00:16:18.538 } 00:16:18.538 ] 00:16:18.538 }' 00:16:18.538 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.538 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.121 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:19.121 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.121 13:11:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:19.121 [2024-12-06 13:11:05.870812] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:19.121 [2024-12-06 13:11:05.870899] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:19.121 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.121 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:19.121 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.121 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.121 [2024-12-06 13:11:05.878838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:19.121 [2024-12-06 13:11:05.881764] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:19.121 [2024-12-06 13:11:05.882089] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:19.121 [2024-12-06 13:11:05.882138] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:19.121 [2024-12-06 13:11:05.882164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:19.121 [2024-12-06 13:11:05.882178] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:19.121 [2024-12-06 13:11:05.882196] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:19.121 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.121 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:19.121 13:11:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:19.121 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:19.121 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:19.121 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:19.121 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:19.121 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.121 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:19.121 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.121 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.121 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.121 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.121 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.121 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.121 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.121 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.121 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.121 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.121 "name": 
"Existed_Raid", 00:16:19.121 "uuid": "2020cf2f-0c31-4cb9-bac8-93ae705022e1", 00:16:19.121 "strip_size_kb": 64, 00:16:19.121 "state": "configuring", 00:16:19.121 "raid_level": "raid0", 00:16:19.121 "superblock": true, 00:16:19.121 "num_base_bdevs": 4, 00:16:19.121 "num_base_bdevs_discovered": 1, 00:16:19.121 "num_base_bdevs_operational": 4, 00:16:19.121 "base_bdevs_list": [ 00:16:19.121 { 00:16:19.121 "name": "BaseBdev1", 00:16:19.121 "uuid": "c1943383-d63c-4ff1-b40d-817d9846545e", 00:16:19.121 "is_configured": true, 00:16:19.121 "data_offset": 2048, 00:16:19.121 "data_size": 63488 00:16:19.121 }, 00:16:19.121 { 00:16:19.121 "name": "BaseBdev2", 00:16:19.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.121 "is_configured": false, 00:16:19.121 "data_offset": 0, 00:16:19.121 "data_size": 0 00:16:19.121 }, 00:16:19.121 { 00:16:19.121 "name": "BaseBdev3", 00:16:19.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.121 "is_configured": false, 00:16:19.121 "data_offset": 0, 00:16:19.121 "data_size": 0 00:16:19.121 }, 00:16:19.121 { 00:16:19.121 "name": "BaseBdev4", 00:16:19.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.121 "is_configured": false, 00:16:19.121 "data_offset": 0, 00:16:19.121 "data_size": 0 00:16:19.121 } 00:16:19.121 ] 00:16:19.121 }' 00:16:19.121 13:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.121 13:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.380 13:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:19.380 13:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.380 13:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.639 [2024-12-06 13:11:06.421045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:16:19.639 BaseBdev2 00:16:19.639 13:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.639 13:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:19.639 13:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:19.639 13:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:19.639 13:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:19.639 13:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:19.639 13:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:19.639 13:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:19.639 13:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.639 13:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.639 13:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.639 13:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:19.639 13:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.639 13:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.639 [ 00:16:19.639 { 00:16:19.639 "name": "BaseBdev2", 00:16:19.639 "aliases": [ 00:16:19.639 "c4a758eb-a05f-41af-b296-71d0976bd565" 00:16:19.639 ], 00:16:19.639 "product_name": "Malloc disk", 00:16:19.639 "block_size": 512, 00:16:19.639 "num_blocks": 65536, 00:16:19.639 "uuid": "c4a758eb-a05f-41af-b296-71d0976bd565", 00:16:19.639 
"assigned_rate_limits": { 00:16:19.639 "rw_ios_per_sec": 0, 00:16:19.639 "rw_mbytes_per_sec": 0, 00:16:19.639 "r_mbytes_per_sec": 0, 00:16:19.639 "w_mbytes_per_sec": 0 00:16:19.639 }, 00:16:19.639 "claimed": true, 00:16:19.639 "claim_type": "exclusive_write", 00:16:19.639 "zoned": false, 00:16:19.639 "supported_io_types": { 00:16:19.639 "read": true, 00:16:19.639 "write": true, 00:16:19.639 "unmap": true, 00:16:19.639 "flush": true, 00:16:19.639 "reset": true, 00:16:19.639 "nvme_admin": false, 00:16:19.639 "nvme_io": false, 00:16:19.639 "nvme_io_md": false, 00:16:19.639 "write_zeroes": true, 00:16:19.639 "zcopy": true, 00:16:19.639 "get_zone_info": false, 00:16:19.639 "zone_management": false, 00:16:19.639 "zone_append": false, 00:16:19.639 "compare": false, 00:16:19.639 "compare_and_write": false, 00:16:19.639 "abort": true, 00:16:19.639 "seek_hole": false, 00:16:19.639 "seek_data": false, 00:16:19.639 "copy": true, 00:16:19.639 "nvme_iov_md": false 00:16:19.639 }, 00:16:19.639 "memory_domains": [ 00:16:19.639 { 00:16:19.639 "dma_device_id": "system", 00:16:19.639 "dma_device_type": 1 00:16:19.639 }, 00:16:19.639 { 00:16:19.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.639 "dma_device_type": 2 00:16:19.639 } 00:16:19.639 ], 00:16:19.639 "driver_specific": {} 00:16:19.639 } 00:16:19.639 ] 00:16:19.639 13:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.639 13:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:19.639 13:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:19.639 13:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:19.639 13:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:19.639 13:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:16:19.639 13:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:19.639 13:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:19.639 13:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.639 13:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:19.639 13:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.639 13:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.639 13:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.639 13:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.639 13:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.639 13:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.639 13:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.639 13:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.639 13:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.639 13:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.639 "name": "Existed_Raid", 00:16:19.639 "uuid": "2020cf2f-0c31-4cb9-bac8-93ae705022e1", 00:16:19.639 "strip_size_kb": 64, 00:16:19.639 "state": "configuring", 00:16:19.639 "raid_level": "raid0", 00:16:19.639 "superblock": true, 00:16:19.639 "num_base_bdevs": 4, 00:16:19.639 "num_base_bdevs_discovered": 2, 00:16:19.639 "num_base_bdevs_operational": 4, 
00:16:19.639 "base_bdevs_list": [ 00:16:19.639 { 00:16:19.639 "name": "BaseBdev1", 00:16:19.639 "uuid": "c1943383-d63c-4ff1-b40d-817d9846545e", 00:16:19.639 "is_configured": true, 00:16:19.639 "data_offset": 2048, 00:16:19.639 "data_size": 63488 00:16:19.639 }, 00:16:19.639 { 00:16:19.639 "name": "BaseBdev2", 00:16:19.639 "uuid": "c4a758eb-a05f-41af-b296-71d0976bd565", 00:16:19.639 "is_configured": true, 00:16:19.639 "data_offset": 2048, 00:16:19.639 "data_size": 63488 00:16:19.639 }, 00:16:19.639 { 00:16:19.639 "name": "BaseBdev3", 00:16:19.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.639 "is_configured": false, 00:16:19.639 "data_offset": 0, 00:16:19.639 "data_size": 0 00:16:19.639 }, 00:16:19.639 { 00:16:19.639 "name": "BaseBdev4", 00:16:19.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.639 "is_configured": false, 00:16:19.639 "data_offset": 0, 00:16:19.639 "data_size": 0 00:16:19.639 } 00:16:19.639 ] 00:16:19.639 }' 00:16:19.639 13:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.639 13:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.206 13:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:20.206 13:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.206 13:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.206 [2024-12-06 13:11:07.039156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:20.206 BaseBdev3 00:16:20.206 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.206 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:20.206 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:16:20.206 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:20.206 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:20.206 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:20.206 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:20.206 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:20.206 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.206 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.206 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.206 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:20.206 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.206 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.206 [ 00:16:20.206 { 00:16:20.206 "name": "BaseBdev3", 00:16:20.206 "aliases": [ 00:16:20.206 "e29c5ca0-b3b4-48ff-a54d-519e095f163a" 00:16:20.206 ], 00:16:20.206 "product_name": "Malloc disk", 00:16:20.206 "block_size": 512, 00:16:20.206 "num_blocks": 65536, 00:16:20.206 "uuid": "e29c5ca0-b3b4-48ff-a54d-519e095f163a", 00:16:20.206 "assigned_rate_limits": { 00:16:20.206 "rw_ios_per_sec": 0, 00:16:20.206 "rw_mbytes_per_sec": 0, 00:16:20.206 "r_mbytes_per_sec": 0, 00:16:20.206 "w_mbytes_per_sec": 0 00:16:20.206 }, 00:16:20.206 "claimed": true, 00:16:20.206 "claim_type": "exclusive_write", 00:16:20.206 "zoned": false, 00:16:20.206 "supported_io_types": { 00:16:20.206 "read": true, 00:16:20.206 
"write": true, 00:16:20.206 "unmap": true, 00:16:20.206 "flush": true, 00:16:20.206 "reset": true, 00:16:20.206 "nvme_admin": false, 00:16:20.206 "nvme_io": false, 00:16:20.206 "nvme_io_md": false, 00:16:20.206 "write_zeroes": true, 00:16:20.206 "zcopy": true, 00:16:20.206 "get_zone_info": false, 00:16:20.206 "zone_management": false, 00:16:20.206 "zone_append": false, 00:16:20.206 "compare": false, 00:16:20.206 "compare_and_write": false, 00:16:20.206 "abort": true, 00:16:20.206 "seek_hole": false, 00:16:20.206 "seek_data": false, 00:16:20.206 "copy": true, 00:16:20.206 "nvme_iov_md": false 00:16:20.206 }, 00:16:20.206 "memory_domains": [ 00:16:20.206 { 00:16:20.206 "dma_device_id": "system", 00:16:20.206 "dma_device_type": 1 00:16:20.206 }, 00:16:20.206 { 00:16:20.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.206 "dma_device_type": 2 00:16:20.206 } 00:16:20.206 ], 00:16:20.206 "driver_specific": {} 00:16:20.206 } 00:16:20.206 ] 00:16:20.206 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.206 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:20.206 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:20.206 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:20.206 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:20.206 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:20.206 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:20.206 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:20.206 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:16:20.206 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:20.206 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.206 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.206 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.206 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.206 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.206 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.206 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.206 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.206 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.207 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.207 "name": "Existed_Raid", 00:16:20.207 "uuid": "2020cf2f-0c31-4cb9-bac8-93ae705022e1", 00:16:20.207 "strip_size_kb": 64, 00:16:20.207 "state": "configuring", 00:16:20.207 "raid_level": "raid0", 00:16:20.207 "superblock": true, 00:16:20.207 "num_base_bdevs": 4, 00:16:20.207 "num_base_bdevs_discovered": 3, 00:16:20.207 "num_base_bdevs_operational": 4, 00:16:20.207 "base_bdevs_list": [ 00:16:20.207 { 00:16:20.207 "name": "BaseBdev1", 00:16:20.207 "uuid": "c1943383-d63c-4ff1-b40d-817d9846545e", 00:16:20.207 "is_configured": true, 00:16:20.207 "data_offset": 2048, 00:16:20.207 "data_size": 63488 00:16:20.207 }, 00:16:20.207 { 00:16:20.207 "name": "BaseBdev2", 00:16:20.207 "uuid": 
"c4a758eb-a05f-41af-b296-71d0976bd565", 00:16:20.207 "is_configured": true, 00:16:20.207 "data_offset": 2048, 00:16:20.207 "data_size": 63488 00:16:20.207 }, 00:16:20.207 { 00:16:20.207 "name": "BaseBdev3", 00:16:20.207 "uuid": "e29c5ca0-b3b4-48ff-a54d-519e095f163a", 00:16:20.207 "is_configured": true, 00:16:20.207 "data_offset": 2048, 00:16:20.207 "data_size": 63488 00:16:20.207 }, 00:16:20.207 { 00:16:20.207 "name": "BaseBdev4", 00:16:20.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.207 "is_configured": false, 00:16:20.207 "data_offset": 0, 00:16:20.207 "data_size": 0 00:16:20.207 } 00:16:20.207 ] 00:16:20.207 }' 00:16:20.207 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.207 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.772 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:20.772 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.772 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.772 [2024-12-06 13:11:07.635896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:20.772 [2024-12-06 13:11:07.636349] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:20.772 [2024-12-06 13:11:07.636371] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:20.772 BaseBdev4 00:16:20.772 [2024-12-06 13:11:07.636817] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:20.772 [2024-12-06 13:11:07.637047] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:20.772 [2024-12-06 13:11:07.637082] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:16:20.772 [2024-12-06 13:11:07.637289] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.772 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.772 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:20.772 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:20.772 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:20.772 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:20.772 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:20.772 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:20.772 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:20.772 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.772 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.772 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.772 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:20.772 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.772 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.772 [ 00:16:20.772 { 00:16:20.772 "name": "BaseBdev4", 00:16:20.772 "aliases": [ 00:16:20.772 "f533368d-aa96-443f-9fcd-4a11c7d55589" 00:16:20.772 ], 00:16:20.772 "product_name": "Malloc disk", 00:16:20.772 "block_size": 512, 00:16:20.772 
"num_blocks": 65536, 00:16:20.772 "uuid": "f533368d-aa96-443f-9fcd-4a11c7d55589", 00:16:20.772 "assigned_rate_limits": { 00:16:20.772 "rw_ios_per_sec": 0, 00:16:20.772 "rw_mbytes_per_sec": 0, 00:16:20.772 "r_mbytes_per_sec": 0, 00:16:20.772 "w_mbytes_per_sec": 0 00:16:20.772 }, 00:16:20.772 "claimed": true, 00:16:20.772 "claim_type": "exclusive_write", 00:16:20.772 "zoned": false, 00:16:20.772 "supported_io_types": { 00:16:20.772 "read": true, 00:16:20.772 "write": true, 00:16:20.772 "unmap": true, 00:16:20.773 "flush": true, 00:16:20.773 "reset": true, 00:16:20.773 "nvme_admin": false, 00:16:20.773 "nvme_io": false, 00:16:20.773 "nvme_io_md": false, 00:16:20.773 "write_zeroes": true, 00:16:20.773 "zcopy": true, 00:16:20.773 "get_zone_info": false, 00:16:20.773 "zone_management": false, 00:16:20.773 "zone_append": false, 00:16:20.773 "compare": false, 00:16:20.773 "compare_and_write": false, 00:16:20.773 "abort": true, 00:16:20.773 "seek_hole": false, 00:16:20.773 "seek_data": false, 00:16:20.773 "copy": true, 00:16:20.773 "nvme_iov_md": false 00:16:20.773 }, 00:16:20.773 "memory_domains": [ 00:16:20.773 { 00:16:20.773 "dma_device_id": "system", 00:16:20.773 "dma_device_type": 1 00:16:20.773 }, 00:16:20.773 { 00:16:20.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.773 "dma_device_type": 2 00:16:20.773 } 00:16:20.773 ], 00:16:20.773 "driver_specific": {} 00:16:20.773 } 00:16:20.773 ] 00:16:20.773 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.773 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:20.773 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:20.773 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:20.773 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:16:20.773 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:20.773 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.773 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:20.773 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.773 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:20.773 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.773 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.773 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.773 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.773 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.773 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.773 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.773 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.773 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.773 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.773 "name": "Existed_Raid", 00:16:20.773 "uuid": "2020cf2f-0c31-4cb9-bac8-93ae705022e1", 00:16:20.773 "strip_size_kb": 64, 00:16:20.773 "state": "online", 00:16:20.773 "raid_level": "raid0", 00:16:20.773 "superblock": true, 00:16:20.773 "num_base_bdevs": 4, 
00:16:20.773 "num_base_bdevs_discovered": 4, 00:16:20.773 "num_base_bdevs_operational": 4, 00:16:20.773 "base_bdevs_list": [ 00:16:20.773 { 00:16:20.773 "name": "BaseBdev1", 00:16:20.773 "uuid": "c1943383-d63c-4ff1-b40d-817d9846545e", 00:16:20.773 "is_configured": true, 00:16:20.773 "data_offset": 2048, 00:16:20.773 "data_size": 63488 00:16:20.773 }, 00:16:20.773 { 00:16:20.773 "name": "BaseBdev2", 00:16:20.773 "uuid": "c4a758eb-a05f-41af-b296-71d0976bd565", 00:16:20.773 "is_configured": true, 00:16:20.773 "data_offset": 2048, 00:16:20.773 "data_size": 63488 00:16:20.773 }, 00:16:20.773 { 00:16:20.773 "name": "BaseBdev3", 00:16:20.773 "uuid": "e29c5ca0-b3b4-48ff-a54d-519e095f163a", 00:16:20.773 "is_configured": true, 00:16:20.773 "data_offset": 2048, 00:16:20.773 "data_size": 63488 00:16:20.773 }, 00:16:20.773 { 00:16:20.773 "name": "BaseBdev4", 00:16:20.773 "uuid": "f533368d-aa96-443f-9fcd-4a11c7d55589", 00:16:20.773 "is_configured": true, 00:16:20.773 "data_offset": 2048, 00:16:20.773 "data_size": 63488 00:16:20.773 } 00:16:20.773 ] 00:16:20.773 }' 00:16:20.773 13:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.773 13:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.338 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:21.338 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:21.338 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:21.338 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:21.338 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:21.338 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:21.338 
13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:21.338 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.338 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.338 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:21.338 [2024-12-06 13:11:08.196753] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:21.338 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.338 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:21.338 "name": "Existed_Raid", 00:16:21.338 "aliases": [ 00:16:21.338 "2020cf2f-0c31-4cb9-bac8-93ae705022e1" 00:16:21.338 ], 00:16:21.338 "product_name": "Raid Volume", 00:16:21.338 "block_size": 512, 00:16:21.338 "num_blocks": 253952, 00:16:21.338 "uuid": "2020cf2f-0c31-4cb9-bac8-93ae705022e1", 00:16:21.338 "assigned_rate_limits": { 00:16:21.338 "rw_ios_per_sec": 0, 00:16:21.338 "rw_mbytes_per_sec": 0, 00:16:21.338 "r_mbytes_per_sec": 0, 00:16:21.338 "w_mbytes_per_sec": 0 00:16:21.338 }, 00:16:21.338 "claimed": false, 00:16:21.338 "zoned": false, 00:16:21.338 "supported_io_types": { 00:16:21.338 "read": true, 00:16:21.338 "write": true, 00:16:21.338 "unmap": true, 00:16:21.338 "flush": true, 00:16:21.338 "reset": true, 00:16:21.338 "nvme_admin": false, 00:16:21.338 "nvme_io": false, 00:16:21.338 "nvme_io_md": false, 00:16:21.338 "write_zeroes": true, 00:16:21.338 "zcopy": false, 00:16:21.338 "get_zone_info": false, 00:16:21.338 "zone_management": false, 00:16:21.338 "zone_append": false, 00:16:21.338 "compare": false, 00:16:21.338 "compare_and_write": false, 00:16:21.338 "abort": false, 00:16:21.338 "seek_hole": false, 00:16:21.338 "seek_data": false, 00:16:21.338 "copy": false, 00:16:21.338 
"nvme_iov_md": false 00:16:21.338 }, 00:16:21.338 "memory_domains": [ 00:16:21.338 { 00:16:21.338 "dma_device_id": "system", 00:16:21.338 "dma_device_type": 1 00:16:21.338 }, 00:16:21.338 { 00:16:21.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.338 "dma_device_type": 2 00:16:21.338 }, 00:16:21.338 { 00:16:21.338 "dma_device_id": "system", 00:16:21.338 "dma_device_type": 1 00:16:21.338 }, 00:16:21.338 { 00:16:21.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.338 "dma_device_type": 2 00:16:21.338 }, 00:16:21.338 { 00:16:21.338 "dma_device_id": "system", 00:16:21.338 "dma_device_type": 1 00:16:21.338 }, 00:16:21.338 { 00:16:21.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.338 "dma_device_type": 2 00:16:21.338 }, 00:16:21.338 { 00:16:21.338 "dma_device_id": "system", 00:16:21.338 "dma_device_type": 1 00:16:21.338 }, 00:16:21.338 { 00:16:21.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.338 "dma_device_type": 2 00:16:21.338 } 00:16:21.338 ], 00:16:21.338 "driver_specific": { 00:16:21.338 "raid": { 00:16:21.338 "uuid": "2020cf2f-0c31-4cb9-bac8-93ae705022e1", 00:16:21.338 "strip_size_kb": 64, 00:16:21.338 "state": "online", 00:16:21.338 "raid_level": "raid0", 00:16:21.338 "superblock": true, 00:16:21.338 "num_base_bdevs": 4, 00:16:21.338 "num_base_bdevs_discovered": 4, 00:16:21.338 "num_base_bdevs_operational": 4, 00:16:21.338 "base_bdevs_list": [ 00:16:21.338 { 00:16:21.338 "name": "BaseBdev1", 00:16:21.338 "uuid": "c1943383-d63c-4ff1-b40d-817d9846545e", 00:16:21.338 "is_configured": true, 00:16:21.339 "data_offset": 2048, 00:16:21.339 "data_size": 63488 00:16:21.339 }, 00:16:21.339 { 00:16:21.339 "name": "BaseBdev2", 00:16:21.339 "uuid": "c4a758eb-a05f-41af-b296-71d0976bd565", 00:16:21.339 "is_configured": true, 00:16:21.339 "data_offset": 2048, 00:16:21.339 "data_size": 63488 00:16:21.339 }, 00:16:21.339 { 00:16:21.339 "name": "BaseBdev3", 00:16:21.339 "uuid": "e29c5ca0-b3b4-48ff-a54d-519e095f163a", 00:16:21.339 "is_configured": true, 
00:16:21.339 "data_offset": 2048, 00:16:21.339 "data_size": 63488 00:16:21.339 }, 00:16:21.339 { 00:16:21.339 "name": "BaseBdev4", 00:16:21.339 "uuid": "f533368d-aa96-443f-9fcd-4a11c7d55589", 00:16:21.339 "is_configured": true, 00:16:21.339 "data_offset": 2048, 00:16:21.339 "data_size": 63488 00:16:21.339 } 00:16:21.339 ] 00:16:21.339 } 00:16:21.339 } 00:16:21.339 }' 00:16:21.339 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:21.339 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:21.339 BaseBdev2 00:16:21.339 BaseBdev3 00:16:21.339 BaseBdev4' 00:16:21.339 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.339 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:21.339 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:21.339 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.339 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:21.339 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.339 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.595 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.595 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:21.595 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:21.595 13:11:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:21.595 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:21.595 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.595 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.595 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.595 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.595 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:21.595 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:21.595 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:21.596 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:21.596 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.596 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.596 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.596 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.596 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:21.596 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:21.596 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:16:21.596 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:21.596 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.596 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.596 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.596 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.596 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:21.596 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:21.596 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:21.596 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.596 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.596 [2024-12-06 13:11:08.560431] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:21.596 [2024-12-06 13:11:08.560508] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:21.596 [2024-12-06 13:11:08.560596] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:21.854 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.854 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:21.854 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:16:21.854 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:16:21.854 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:16:21.854 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:16:21.854 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:16:21.854 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:21.854 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:16:21.854 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:21.854 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.854 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:21.854 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.854 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.854 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.854 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.854 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.854 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.854 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.854 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.854 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:21.854 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.854 "name": "Existed_Raid", 00:16:21.854 "uuid": "2020cf2f-0c31-4cb9-bac8-93ae705022e1", 00:16:21.854 "strip_size_kb": 64, 00:16:21.854 "state": "offline", 00:16:21.854 "raid_level": "raid0", 00:16:21.854 "superblock": true, 00:16:21.854 "num_base_bdevs": 4, 00:16:21.854 "num_base_bdevs_discovered": 3, 00:16:21.854 "num_base_bdevs_operational": 3, 00:16:21.854 "base_bdevs_list": [ 00:16:21.854 { 00:16:21.854 "name": null, 00:16:21.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.854 "is_configured": false, 00:16:21.854 "data_offset": 0, 00:16:21.854 "data_size": 63488 00:16:21.854 }, 00:16:21.854 { 00:16:21.854 "name": "BaseBdev2", 00:16:21.854 "uuid": "c4a758eb-a05f-41af-b296-71d0976bd565", 00:16:21.854 "is_configured": true, 00:16:21.854 "data_offset": 2048, 00:16:21.854 "data_size": 63488 00:16:21.854 }, 00:16:21.854 { 00:16:21.854 "name": "BaseBdev3", 00:16:21.854 "uuid": "e29c5ca0-b3b4-48ff-a54d-519e095f163a", 00:16:21.854 "is_configured": true, 00:16:21.854 "data_offset": 2048, 00:16:21.854 "data_size": 63488 00:16:21.854 }, 00:16:21.854 { 00:16:21.854 "name": "BaseBdev4", 00:16:21.854 "uuid": "f533368d-aa96-443f-9fcd-4a11c7d55589", 00:16:21.854 "is_configured": true, 00:16:21.854 "data_offset": 2048, 00:16:21.854 "data_size": 63488 00:16:21.854 } 00:16:21.854 ] 00:16:21.854 }' 00:16:21.854 13:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.854 13:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.419 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:22.419 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:22.419 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.419 
13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:22.419 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.419 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.419 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.419 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:22.419 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:22.419 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:22.419 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.419 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.419 [2024-12-06 13:11:09.236384] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:22.419 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.419 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:22.419 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:22.419 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:22.419 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.419 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.419 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.419 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:22.419 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:22.419 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:22.419 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:22.419 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.419 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.419 [2024-12-06 13:11:09.385951] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:22.677 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.677 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:22.677 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:22.677 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.677 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:22.677 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.677 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.677 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.677 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:22.677 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:22.677 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:22.677 13:11:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.677 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.677 [2024-12-06 13:11:09.536344] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:22.677 [2024-12-06 13:11:09.536421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:22.677 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.677 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:22.677 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:22.677 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.677 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.677 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:22.677 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.677 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.677 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:22.677 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:22.677 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:22.677 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:22.677 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:22.677 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:16:22.677 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.677 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.935 BaseBdev2 00:16:22.935 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.935 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:22.935 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:22.935 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:22.935 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:22.935 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:22.935 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:22.935 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:22.935 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.935 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.935 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.935 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:22.935 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.935 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.935 [ 00:16:22.935 { 00:16:22.935 "name": "BaseBdev2", 00:16:22.935 "aliases": [ 00:16:22.935 
"d787a6b6-5fd6-441b-b72b-f323b7192bdc" 00:16:22.935 ], 00:16:22.935 "product_name": "Malloc disk", 00:16:22.935 "block_size": 512, 00:16:22.935 "num_blocks": 65536, 00:16:22.935 "uuid": "d787a6b6-5fd6-441b-b72b-f323b7192bdc", 00:16:22.935 "assigned_rate_limits": { 00:16:22.935 "rw_ios_per_sec": 0, 00:16:22.935 "rw_mbytes_per_sec": 0, 00:16:22.935 "r_mbytes_per_sec": 0, 00:16:22.935 "w_mbytes_per_sec": 0 00:16:22.935 }, 00:16:22.935 "claimed": false, 00:16:22.935 "zoned": false, 00:16:22.935 "supported_io_types": { 00:16:22.935 "read": true, 00:16:22.935 "write": true, 00:16:22.935 "unmap": true, 00:16:22.935 "flush": true, 00:16:22.936 "reset": true, 00:16:22.936 "nvme_admin": false, 00:16:22.936 "nvme_io": false, 00:16:22.936 "nvme_io_md": false, 00:16:22.936 "write_zeroes": true, 00:16:22.936 "zcopy": true, 00:16:22.936 "get_zone_info": false, 00:16:22.936 "zone_management": false, 00:16:22.936 "zone_append": false, 00:16:22.936 "compare": false, 00:16:22.936 "compare_and_write": false, 00:16:22.936 "abort": true, 00:16:22.936 "seek_hole": false, 00:16:22.936 "seek_data": false, 00:16:22.936 "copy": true, 00:16:22.936 "nvme_iov_md": false 00:16:22.936 }, 00:16:22.936 "memory_domains": [ 00:16:22.936 { 00:16:22.936 "dma_device_id": "system", 00:16:22.936 "dma_device_type": 1 00:16:22.936 }, 00:16:22.936 { 00:16:22.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.936 "dma_device_type": 2 00:16:22.936 } 00:16:22.936 ], 00:16:22.936 "driver_specific": {} 00:16:22.936 } 00:16:22.936 ] 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:22.936 13:11:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.936 BaseBdev3 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.936 [ 00:16:22.936 { 
00:16:22.936 "name": "BaseBdev3", 00:16:22.936 "aliases": [ 00:16:22.936 "00a5391f-aed3-4c7f-aa16-d45e9daeacd4" 00:16:22.936 ], 00:16:22.936 "product_name": "Malloc disk", 00:16:22.936 "block_size": 512, 00:16:22.936 "num_blocks": 65536, 00:16:22.936 "uuid": "00a5391f-aed3-4c7f-aa16-d45e9daeacd4", 00:16:22.936 "assigned_rate_limits": { 00:16:22.936 "rw_ios_per_sec": 0, 00:16:22.936 "rw_mbytes_per_sec": 0, 00:16:22.936 "r_mbytes_per_sec": 0, 00:16:22.936 "w_mbytes_per_sec": 0 00:16:22.936 }, 00:16:22.936 "claimed": false, 00:16:22.936 "zoned": false, 00:16:22.936 "supported_io_types": { 00:16:22.936 "read": true, 00:16:22.936 "write": true, 00:16:22.936 "unmap": true, 00:16:22.936 "flush": true, 00:16:22.936 "reset": true, 00:16:22.936 "nvme_admin": false, 00:16:22.936 "nvme_io": false, 00:16:22.936 "nvme_io_md": false, 00:16:22.936 "write_zeroes": true, 00:16:22.936 "zcopy": true, 00:16:22.936 "get_zone_info": false, 00:16:22.936 "zone_management": false, 00:16:22.936 "zone_append": false, 00:16:22.936 "compare": false, 00:16:22.936 "compare_and_write": false, 00:16:22.936 "abort": true, 00:16:22.936 "seek_hole": false, 00:16:22.936 "seek_data": false, 00:16:22.936 "copy": true, 00:16:22.936 "nvme_iov_md": false 00:16:22.936 }, 00:16:22.936 "memory_domains": [ 00:16:22.936 { 00:16:22.936 "dma_device_id": "system", 00:16:22.936 "dma_device_type": 1 00:16:22.936 }, 00:16:22.936 { 00:16:22.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.936 "dma_device_type": 2 00:16:22.936 } 00:16:22.936 ], 00:16:22.936 "driver_specific": {} 00:16:22.936 } 00:16:22.936 ] 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.936 BaseBdev4 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:16:22.936 [ 00:16:22.936 { 00:16:22.936 "name": "BaseBdev4", 00:16:22.936 "aliases": [ 00:16:22.936 "1a4f80a0-6538-411f-9c5d-cd727ab85f3b" 00:16:22.936 ], 00:16:22.936 "product_name": "Malloc disk", 00:16:22.936 "block_size": 512, 00:16:22.936 "num_blocks": 65536, 00:16:22.936 "uuid": "1a4f80a0-6538-411f-9c5d-cd727ab85f3b", 00:16:22.936 "assigned_rate_limits": { 00:16:22.936 "rw_ios_per_sec": 0, 00:16:22.936 "rw_mbytes_per_sec": 0, 00:16:22.936 "r_mbytes_per_sec": 0, 00:16:22.936 "w_mbytes_per_sec": 0 00:16:22.936 }, 00:16:22.936 "claimed": false, 00:16:22.936 "zoned": false, 00:16:22.936 "supported_io_types": { 00:16:22.936 "read": true, 00:16:22.936 "write": true, 00:16:22.936 "unmap": true, 00:16:22.936 "flush": true, 00:16:22.936 "reset": true, 00:16:22.936 "nvme_admin": false, 00:16:22.936 "nvme_io": false, 00:16:22.936 "nvme_io_md": false, 00:16:22.936 "write_zeroes": true, 00:16:22.936 "zcopy": true, 00:16:22.936 "get_zone_info": false, 00:16:22.936 "zone_management": false, 00:16:22.936 "zone_append": false, 00:16:22.936 "compare": false, 00:16:22.936 "compare_and_write": false, 00:16:22.936 "abort": true, 00:16:22.936 "seek_hole": false, 00:16:22.936 "seek_data": false, 00:16:22.936 "copy": true, 00:16:22.936 "nvme_iov_md": false 00:16:22.936 }, 00:16:22.936 "memory_domains": [ 00:16:22.936 { 00:16:22.936 "dma_device_id": "system", 00:16:22.936 "dma_device_type": 1 00:16:22.936 }, 00:16:22.936 { 00:16:22.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.936 "dma_device_type": 2 00:16:22.936 } 00:16:22.936 ], 00:16:22.936 "driver_specific": {} 00:16:22.936 } 00:16:22.936 ] 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:22.936 13:11:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.936 [2024-12-06 13:11:09.926926] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:22.936 [2024-12-06 13:11:09.926994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:22.936 [2024-12-06 13:11:09.927034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:22.936 [2024-12-06 13:11:09.929788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:22.936 [2024-12-06 13:11:09.929905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.936 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:22.937 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.937 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.937 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:22.937 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.937 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:16:22.937 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.937 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.937 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.937 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.937 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.937 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.937 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.937 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.194 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.194 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.194 "name": "Existed_Raid", 00:16:23.194 "uuid": "23d882da-05f9-4dce-ba63-091074ea016f", 00:16:23.194 "strip_size_kb": 64, 00:16:23.194 "state": "configuring", 00:16:23.194 "raid_level": "raid0", 00:16:23.194 "superblock": true, 00:16:23.194 "num_base_bdevs": 4, 00:16:23.194 "num_base_bdevs_discovered": 3, 00:16:23.194 "num_base_bdevs_operational": 4, 00:16:23.194 "base_bdevs_list": [ 00:16:23.194 { 00:16:23.194 "name": "BaseBdev1", 00:16:23.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.194 "is_configured": false, 00:16:23.194 "data_offset": 0, 00:16:23.194 "data_size": 0 00:16:23.194 }, 00:16:23.194 { 00:16:23.194 "name": "BaseBdev2", 00:16:23.194 "uuid": "d787a6b6-5fd6-441b-b72b-f323b7192bdc", 00:16:23.194 "is_configured": true, 00:16:23.194 "data_offset": 2048, 00:16:23.194 "data_size": 63488 
00:16:23.194 }, 00:16:23.194 { 00:16:23.194 "name": "BaseBdev3", 00:16:23.194 "uuid": "00a5391f-aed3-4c7f-aa16-d45e9daeacd4", 00:16:23.194 "is_configured": true, 00:16:23.194 "data_offset": 2048, 00:16:23.194 "data_size": 63488 00:16:23.194 }, 00:16:23.194 { 00:16:23.194 "name": "BaseBdev4", 00:16:23.194 "uuid": "1a4f80a0-6538-411f-9c5d-cd727ab85f3b", 00:16:23.194 "is_configured": true, 00:16:23.194 "data_offset": 2048, 00:16:23.194 "data_size": 63488 00:16:23.194 } 00:16:23.194 ] 00:16:23.194 }' 00:16:23.194 13:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.194 13:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.452 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:23.452 13:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.452 13:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.711 [2024-12-06 13:11:10.467125] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:23.711 13:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.711 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:23.711 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.711 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.711 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:23.711 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.711 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:16:23.711 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.711 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.711 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.711 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.711 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.711 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.711 13:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.711 13:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.711 13:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.711 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.711 "name": "Existed_Raid", 00:16:23.711 "uuid": "23d882da-05f9-4dce-ba63-091074ea016f", 00:16:23.711 "strip_size_kb": 64, 00:16:23.711 "state": "configuring", 00:16:23.711 "raid_level": "raid0", 00:16:23.711 "superblock": true, 00:16:23.711 "num_base_bdevs": 4, 00:16:23.711 "num_base_bdevs_discovered": 2, 00:16:23.711 "num_base_bdevs_operational": 4, 00:16:23.711 "base_bdevs_list": [ 00:16:23.711 { 00:16:23.711 "name": "BaseBdev1", 00:16:23.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.711 "is_configured": false, 00:16:23.711 "data_offset": 0, 00:16:23.711 "data_size": 0 00:16:23.711 }, 00:16:23.711 { 00:16:23.711 "name": null, 00:16:23.711 "uuid": "d787a6b6-5fd6-441b-b72b-f323b7192bdc", 00:16:23.711 "is_configured": false, 00:16:23.711 "data_offset": 0, 00:16:23.711 "data_size": 63488 
00:16:23.711 }, 00:16:23.711 { 00:16:23.711 "name": "BaseBdev3", 00:16:23.711 "uuid": "00a5391f-aed3-4c7f-aa16-d45e9daeacd4", 00:16:23.711 "is_configured": true, 00:16:23.711 "data_offset": 2048, 00:16:23.711 "data_size": 63488 00:16:23.711 }, 00:16:23.711 { 00:16:23.711 "name": "BaseBdev4", 00:16:23.711 "uuid": "1a4f80a0-6538-411f-9c5d-cd727ab85f3b", 00:16:23.711 "is_configured": true, 00:16:23.711 "data_offset": 2048, 00:16:23.711 "data_size": 63488 00:16:23.711 } 00:16:23.711 ] 00:16:23.711 }' 00:16:23.711 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.711 13:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.968 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.968 13:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:23.968 13:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.968 13:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.225 13:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.225 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:24.225 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:24.225 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.225 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.225 [2024-12-06 13:11:11.079114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:24.225 BaseBdev1 00:16:24.225 13:11:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.225 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:24.225 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:24.225 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:24.225 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:24.225 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:24.225 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:24.225 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:24.225 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.225 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.225 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.225 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:24.225 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.225 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.225 [ 00:16:24.225 { 00:16:24.225 "name": "BaseBdev1", 00:16:24.225 "aliases": [ 00:16:24.225 "ef14ceac-1bdb-40fc-8d08-bcfe2120b877" 00:16:24.225 ], 00:16:24.225 "product_name": "Malloc disk", 00:16:24.225 "block_size": 512, 00:16:24.225 "num_blocks": 65536, 00:16:24.225 "uuid": "ef14ceac-1bdb-40fc-8d08-bcfe2120b877", 00:16:24.225 "assigned_rate_limits": { 00:16:24.225 "rw_ios_per_sec": 0, 00:16:24.225 "rw_mbytes_per_sec": 0, 
00:16:24.225 "r_mbytes_per_sec": 0, 00:16:24.225 "w_mbytes_per_sec": 0 00:16:24.225 }, 00:16:24.225 "claimed": true, 00:16:24.225 "claim_type": "exclusive_write", 00:16:24.225 "zoned": false, 00:16:24.225 "supported_io_types": { 00:16:24.225 "read": true, 00:16:24.225 "write": true, 00:16:24.225 "unmap": true, 00:16:24.225 "flush": true, 00:16:24.225 "reset": true, 00:16:24.225 "nvme_admin": false, 00:16:24.225 "nvme_io": false, 00:16:24.225 "nvme_io_md": false, 00:16:24.225 "write_zeroes": true, 00:16:24.225 "zcopy": true, 00:16:24.225 "get_zone_info": false, 00:16:24.225 "zone_management": false, 00:16:24.225 "zone_append": false, 00:16:24.225 "compare": false, 00:16:24.226 "compare_and_write": false, 00:16:24.226 "abort": true, 00:16:24.226 "seek_hole": false, 00:16:24.226 "seek_data": false, 00:16:24.226 "copy": true, 00:16:24.226 "nvme_iov_md": false 00:16:24.226 }, 00:16:24.226 "memory_domains": [ 00:16:24.226 { 00:16:24.226 "dma_device_id": "system", 00:16:24.226 "dma_device_type": 1 00:16:24.226 }, 00:16:24.226 { 00:16:24.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.226 "dma_device_type": 2 00:16:24.226 } 00:16:24.226 ], 00:16:24.226 "driver_specific": {} 00:16:24.226 } 00:16:24.226 ] 00:16:24.226 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.226 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:24.226 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:24.226 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.226 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:24.226 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:24.226 13:11:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.226 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:24.226 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.226 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.226 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.226 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.226 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.226 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.226 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.226 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.226 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.226 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.226 "name": "Existed_Raid", 00:16:24.226 "uuid": "23d882da-05f9-4dce-ba63-091074ea016f", 00:16:24.226 "strip_size_kb": 64, 00:16:24.226 "state": "configuring", 00:16:24.226 "raid_level": "raid0", 00:16:24.226 "superblock": true, 00:16:24.226 "num_base_bdevs": 4, 00:16:24.226 "num_base_bdevs_discovered": 3, 00:16:24.226 "num_base_bdevs_operational": 4, 00:16:24.226 "base_bdevs_list": [ 00:16:24.226 { 00:16:24.226 "name": "BaseBdev1", 00:16:24.226 "uuid": "ef14ceac-1bdb-40fc-8d08-bcfe2120b877", 00:16:24.226 "is_configured": true, 00:16:24.226 "data_offset": 2048, 00:16:24.226 "data_size": 63488 00:16:24.226 }, 00:16:24.226 { 
00:16:24.226 "name": null, 00:16:24.226 "uuid": "d787a6b6-5fd6-441b-b72b-f323b7192bdc", 00:16:24.226 "is_configured": false, 00:16:24.226 "data_offset": 0, 00:16:24.226 "data_size": 63488 00:16:24.226 }, 00:16:24.226 { 00:16:24.226 "name": "BaseBdev3", 00:16:24.226 "uuid": "00a5391f-aed3-4c7f-aa16-d45e9daeacd4", 00:16:24.226 "is_configured": true, 00:16:24.226 "data_offset": 2048, 00:16:24.226 "data_size": 63488 00:16:24.226 }, 00:16:24.226 { 00:16:24.226 "name": "BaseBdev4", 00:16:24.226 "uuid": "1a4f80a0-6538-411f-9c5d-cd727ab85f3b", 00:16:24.226 "is_configured": true, 00:16:24.226 "data_offset": 2048, 00:16:24.226 "data_size": 63488 00:16:24.226 } 00:16:24.226 ] 00:16:24.226 }' 00:16:24.226 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.226 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.792 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.792 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.792 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.792 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:24.792 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.792 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:24.792 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:24.792 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.792 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.792 [2024-12-06 13:11:11.707450] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:24.792 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.792 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:24.792 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.792 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:24.792 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:24.792 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.792 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:24.792 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.792 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.792 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.792 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.792 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.792 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.792 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.792 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.792 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.792 13:11:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.792 "name": "Existed_Raid", 00:16:24.792 "uuid": "23d882da-05f9-4dce-ba63-091074ea016f", 00:16:24.792 "strip_size_kb": 64, 00:16:24.792 "state": "configuring", 00:16:24.792 "raid_level": "raid0", 00:16:24.792 "superblock": true, 00:16:24.792 "num_base_bdevs": 4, 00:16:24.792 "num_base_bdevs_discovered": 2, 00:16:24.792 "num_base_bdevs_operational": 4, 00:16:24.792 "base_bdevs_list": [ 00:16:24.792 { 00:16:24.792 "name": "BaseBdev1", 00:16:24.792 "uuid": "ef14ceac-1bdb-40fc-8d08-bcfe2120b877", 00:16:24.792 "is_configured": true, 00:16:24.792 "data_offset": 2048, 00:16:24.792 "data_size": 63488 00:16:24.792 }, 00:16:24.792 { 00:16:24.792 "name": null, 00:16:24.792 "uuid": "d787a6b6-5fd6-441b-b72b-f323b7192bdc", 00:16:24.792 "is_configured": false, 00:16:24.792 "data_offset": 0, 00:16:24.792 "data_size": 63488 00:16:24.792 }, 00:16:24.792 { 00:16:24.792 "name": null, 00:16:24.792 "uuid": "00a5391f-aed3-4c7f-aa16-d45e9daeacd4", 00:16:24.792 "is_configured": false, 00:16:24.792 "data_offset": 0, 00:16:24.792 "data_size": 63488 00:16:24.792 }, 00:16:24.792 { 00:16:24.792 "name": "BaseBdev4", 00:16:24.792 "uuid": "1a4f80a0-6538-411f-9c5d-cd727ab85f3b", 00:16:24.792 "is_configured": true, 00:16:24.792 "data_offset": 2048, 00:16:24.792 "data_size": 63488 00:16:24.792 } 00:16:24.792 ] 00:16:24.792 }' 00:16:24.792 13:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.792 13:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.359 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.359 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.359 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:25.359 
13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.359 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.359 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:25.359 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:25.359 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.359 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.359 [2024-12-06 13:11:12.283617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:25.359 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.359 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:25.359 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.359 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.359 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:25.359 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.359 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:25.359 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.359 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.359 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:25.359 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.359 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.359 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.359 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.359 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.359 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.359 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.359 "name": "Existed_Raid", 00:16:25.359 "uuid": "23d882da-05f9-4dce-ba63-091074ea016f", 00:16:25.359 "strip_size_kb": 64, 00:16:25.359 "state": "configuring", 00:16:25.359 "raid_level": "raid0", 00:16:25.359 "superblock": true, 00:16:25.359 "num_base_bdevs": 4, 00:16:25.359 "num_base_bdevs_discovered": 3, 00:16:25.359 "num_base_bdevs_operational": 4, 00:16:25.359 "base_bdevs_list": [ 00:16:25.359 { 00:16:25.359 "name": "BaseBdev1", 00:16:25.359 "uuid": "ef14ceac-1bdb-40fc-8d08-bcfe2120b877", 00:16:25.359 "is_configured": true, 00:16:25.359 "data_offset": 2048, 00:16:25.359 "data_size": 63488 00:16:25.359 }, 00:16:25.359 { 00:16:25.359 "name": null, 00:16:25.359 "uuid": "d787a6b6-5fd6-441b-b72b-f323b7192bdc", 00:16:25.359 "is_configured": false, 00:16:25.359 "data_offset": 0, 00:16:25.359 "data_size": 63488 00:16:25.359 }, 00:16:25.359 { 00:16:25.359 "name": "BaseBdev3", 00:16:25.359 "uuid": "00a5391f-aed3-4c7f-aa16-d45e9daeacd4", 00:16:25.359 "is_configured": true, 00:16:25.359 "data_offset": 2048, 00:16:25.359 "data_size": 63488 00:16:25.359 }, 00:16:25.359 { 00:16:25.359 "name": "BaseBdev4", 00:16:25.359 "uuid": 
"1a4f80a0-6538-411f-9c5d-cd727ab85f3b", 00:16:25.359 "is_configured": true, 00:16:25.359 "data_offset": 2048, 00:16:25.359 "data_size": 63488 00:16:25.359 } 00:16:25.359 ] 00:16:25.359 }' 00:16:25.359 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.359 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.926 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:25.926 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.926 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.926 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.926 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.926 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:25.926 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:25.926 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.926 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.926 [2024-12-06 13:11:12.827813] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:25.926 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.926 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:25.926 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.926 13:11:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.926 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:25.926 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.926 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:25.926 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.926 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.926 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.926 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.926 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.926 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.926 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.926 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.185 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.185 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.185 "name": "Existed_Raid", 00:16:26.185 "uuid": "23d882da-05f9-4dce-ba63-091074ea016f", 00:16:26.185 "strip_size_kb": 64, 00:16:26.185 "state": "configuring", 00:16:26.185 "raid_level": "raid0", 00:16:26.185 "superblock": true, 00:16:26.185 "num_base_bdevs": 4, 00:16:26.185 "num_base_bdevs_discovered": 2, 00:16:26.185 "num_base_bdevs_operational": 4, 00:16:26.185 "base_bdevs_list": [ 00:16:26.185 { 00:16:26.185 "name": null, 00:16:26.185 
"uuid": "ef14ceac-1bdb-40fc-8d08-bcfe2120b877", 00:16:26.185 "is_configured": false, 00:16:26.185 "data_offset": 0, 00:16:26.185 "data_size": 63488 00:16:26.185 }, 00:16:26.185 { 00:16:26.185 "name": null, 00:16:26.185 "uuid": "d787a6b6-5fd6-441b-b72b-f323b7192bdc", 00:16:26.185 "is_configured": false, 00:16:26.185 "data_offset": 0, 00:16:26.185 "data_size": 63488 00:16:26.185 }, 00:16:26.185 { 00:16:26.185 "name": "BaseBdev3", 00:16:26.185 "uuid": "00a5391f-aed3-4c7f-aa16-d45e9daeacd4", 00:16:26.185 "is_configured": true, 00:16:26.185 "data_offset": 2048, 00:16:26.185 "data_size": 63488 00:16:26.185 }, 00:16:26.185 { 00:16:26.185 "name": "BaseBdev4", 00:16:26.185 "uuid": "1a4f80a0-6538-411f-9c5d-cd727ab85f3b", 00:16:26.185 "is_configured": true, 00:16:26.185 "data_offset": 2048, 00:16:26.185 "data_size": 63488 00:16:26.185 } 00:16:26.185 ] 00:16:26.185 }' 00:16:26.185 13:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.185 13:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.445 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.445 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:26.445 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.445 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.445 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.445 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:26.445 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:26.445 13:11:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.445 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.704 [2024-12-06 13:11:13.460838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:26.704 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.704 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:26.704 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.704 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:26.704 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:26.704 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.704 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:26.704 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.704 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.704 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.704 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.704 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.704 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.704 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.704 13:11:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.704 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.704 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.704 "name": "Existed_Raid", 00:16:26.704 "uuid": "23d882da-05f9-4dce-ba63-091074ea016f", 00:16:26.704 "strip_size_kb": 64, 00:16:26.704 "state": "configuring", 00:16:26.704 "raid_level": "raid0", 00:16:26.704 "superblock": true, 00:16:26.704 "num_base_bdevs": 4, 00:16:26.704 "num_base_bdevs_discovered": 3, 00:16:26.704 "num_base_bdevs_operational": 4, 00:16:26.704 "base_bdevs_list": [ 00:16:26.704 { 00:16:26.704 "name": null, 00:16:26.704 "uuid": "ef14ceac-1bdb-40fc-8d08-bcfe2120b877", 00:16:26.704 "is_configured": false, 00:16:26.704 "data_offset": 0, 00:16:26.704 "data_size": 63488 00:16:26.704 }, 00:16:26.704 { 00:16:26.704 "name": "BaseBdev2", 00:16:26.704 "uuid": "d787a6b6-5fd6-441b-b72b-f323b7192bdc", 00:16:26.704 "is_configured": true, 00:16:26.704 "data_offset": 2048, 00:16:26.704 "data_size": 63488 00:16:26.704 }, 00:16:26.704 { 00:16:26.704 "name": "BaseBdev3", 00:16:26.704 "uuid": "00a5391f-aed3-4c7f-aa16-d45e9daeacd4", 00:16:26.704 "is_configured": true, 00:16:26.704 "data_offset": 2048, 00:16:26.704 "data_size": 63488 00:16:26.704 }, 00:16:26.704 { 00:16:26.704 "name": "BaseBdev4", 00:16:26.704 "uuid": "1a4f80a0-6538-411f-9c5d-cd727ab85f3b", 00:16:26.704 "is_configured": true, 00:16:26.704 "data_offset": 2048, 00:16:26.704 "data_size": 63488 00:16:26.704 } 00:16:26.704 ] 00:16:26.704 }' 00:16:26.704 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.704 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.963 13:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:26.963 13:11:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.963 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.963 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.963 13:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.222 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:27.222 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.222 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:27.222 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.222 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.222 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.222 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ef14ceac-1bdb-40fc-8d08-bcfe2120b877 00:16:27.222 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.222 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.222 [2024-12-06 13:11:14.110882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:27.222 [2024-12-06 13:11:14.111291] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:27.222 [2024-12-06 13:11:14.111314] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:27.222 NewBaseBdev 00:16:27.222 [2024-12-06 13:11:14.111725] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:27.222 [2024-12-06 13:11:14.111932] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:27.222 [2024-12-06 13:11:14.111958] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:27.222 [2024-12-06 13:11:14.112146] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.222 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.222 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:27.222 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:27.222 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:27.222 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:27.222 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:27.222 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:27.222 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:27.222 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.222 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.222 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.222 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:27.222 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.222 13:11:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.222 [ 00:16:27.222 { 00:16:27.222 "name": "NewBaseBdev", 00:16:27.222 "aliases": [ 00:16:27.222 "ef14ceac-1bdb-40fc-8d08-bcfe2120b877" 00:16:27.222 ], 00:16:27.222 "product_name": "Malloc disk", 00:16:27.222 "block_size": 512, 00:16:27.222 "num_blocks": 65536, 00:16:27.222 "uuid": "ef14ceac-1bdb-40fc-8d08-bcfe2120b877", 00:16:27.222 "assigned_rate_limits": { 00:16:27.222 "rw_ios_per_sec": 0, 00:16:27.222 "rw_mbytes_per_sec": 0, 00:16:27.222 "r_mbytes_per_sec": 0, 00:16:27.222 "w_mbytes_per_sec": 0 00:16:27.222 }, 00:16:27.222 "claimed": true, 00:16:27.222 "claim_type": "exclusive_write", 00:16:27.222 "zoned": false, 00:16:27.222 "supported_io_types": { 00:16:27.222 "read": true, 00:16:27.222 "write": true, 00:16:27.222 "unmap": true, 00:16:27.222 "flush": true, 00:16:27.222 "reset": true, 00:16:27.222 "nvme_admin": false, 00:16:27.222 "nvme_io": false, 00:16:27.222 "nvme_io_md": false, 00:16:27.222 "write_zeroes": true, 00:16:27.222 "zcopy": true, 00:16:27.222 "get_zone_info": false, 00:16:27.222 "zone_management": false, 00:16:27.222 "zone_append": false, 00:16:27.222 "compare": false, 00:16:27.222 "compare_and_write": false, 00:16:27.222 "abort": true, 00:16:27.222 "seek_hole": false, 00:16:27.222 "seek_data": false, 00:16:27.222 "copy": true, 00:16:27.222 "nvme_iov_md": false 00:16:27.222 }, 00:16:27.222 "memory_domains": [ 00:16:27.222 { 00:16:27.222 "dma_device_id": "system", 00:16:27.222 "dma_device_type": 1 00:16:27.222 }, 00:16:27.222 { 00:16:27.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.222 "dma_device_type": 2 00:16:27.222 } 00:16:27.222 ], 00:16:27.222 "driver_specific": {} 00:16:27.222 } 00:16:27.222 ] 00:16:27.222 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.222 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:27.222 13:11:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:16:27.222 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:27.222 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.222 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:27.222 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.223 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:27.223 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.223 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.223 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.223 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.223 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.223 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.223 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.223 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.223 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.223 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.223 "name": "Existed_Raid", 00:16:27.223 "uuid": "23d882da-05f9-4dce-ba63-091074ea016f", 00:16:27.223 "strip_size_kb": 64, 00:16:27.223 
"state": "online", 00:16:27.223 "raid_level": "raid0", 00:16:27.223 "superblock": true, 00:16:27.223 "num_base_bdevs": 4, 00:16:27.223 "num_base_bdevs_discovered": 4, 00:16:27.223 "num_base_bdevs_operational": 4, 00:16:27.223 "base_bdevs_list": [ 00:16:27.223 { 00:16:27.223 "name": "NewBaseBdev", 00:16:27.223 "uuid": "ef14ceac-1bdb-40fc-8d08-bcfe2120b877", 00:16:27.223 "is_configured": true, 00:16:27.223 "data_offset": 2048, 00:16:27.223 "data_size": 63488 00:16:27.223 }, 00:16:27.223 { 00:16:27.223 "name": "BaseBdev2", 00:16:27.223 "uuid": "d787a6b6-5fd6-441b-b72b-f323b7192bdc", 00:16:27.223 "is_configured": true, 00:16:27.223 "data_offset": 2048, 00:16:27.223 "data_size": 63488 00:16:27.223 }, 00:16:27.223 { 00:16:27.223 "name": "BaseBdev3", 00:16:27.223 "uuid": "00a5391f-aed3-4c7f-aa16-d45e9daeacd4", 00:16:27.223 "is_configured": true, 00:16:27.223 "data_offset": 2048, 00:16:27.223 "data_size": 63488 00:16:27.223 }, 00:16:27.223 { 00:16:27.223 "name": "BaseBdev4", 00:16:27.223 "uuid": "1a4f80a0-6538-411f-9c5d-cd727ab85f3b", 00:16:27.223 "is_configured": true, 00:16:27.223 "data_offset": 2048, 00:16:27.223 "data_size": 63488 00:16:27.223 } 00:16:27.223 ] 00:16:27.223 }' 00:16:27.223 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.223 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.792 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:27.792 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:27.792 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:27.792 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:27.792 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:27.792 
13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:27.792 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:27.792 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:27.792 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.792 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.792 [2024-12-06 13:11:14.679592] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:27.792 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.792 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:27.792 "name": "Existed_Raid", 00:16:27.792 "aliases": [ 00:16:27.792 "23d882da-05f9-4dce-ba63-091074ea016f" 00:16:27.792 ], 00:16:27.792 "product_name": "Raid Volume", 00:16:27.792 "block_size": 512, 00:16:27.792 "num_blocks": 253952, 00:16:27.792 "uuid": "23d882da-05f9-4dce-ba63-091074ea016f", 00:16:27.792 "assigned_rate_limits": { 00:16:27.792 "rw_ios_per_sec": 0, 00:16:27.792 "rw_mbytes_per_sec": 0, 00:16:27.792 "r_mbytes_per_sec": 0, 00:16:27.792 "w_mbytes_per_sec": 0 00:16:27.792 }, 00:16:27.792 "claimed": false, 00:16:27.792 "zoned": false, 00:16:27.792 "supported_io_types": { 00:16:27.792 "read": true, 00:16:27.792 "write": true, 00:16:27.792 "unmap": true, 00:16:27.792 "flush": true, 00:16:27.792 "reset": true, 00:16:27.792 "nvme_admin": false, 00:16:27.792 "nvme_io": false, 00:16:27.792 "nvme_io_md": false, 00:16:27.792 "write_zeroes": true, 00:16:27.792 "zcopy": false, 00:16:27.792 "get_zone_info": false, 00:16:27.792 "zone_management": false, 00:16:27.792 "zone_append": false, 00:16:27.792 "compare": false, 00:16:27.792 "compare_and_write": false, 00:16:27.792 "abort": 
false, 00:16:27.792 "seek_hole": false, 00:16:27.792 "seek_data": false, 00:16:27.792 "copy": false, 00:16:27.792 "nvme_iov_md": false 00:16:27.792 }, 00:16:27.792 "memory_domains": [ 00:16:27.792 { 00:16:27.792 "dma_device_id": "system", 00:16:27.792 "dma_device_type": 1 00:16:27.792 }, 00:16:27.792 { 00:16:27.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.792 "dma_device_type": 2 00:16:27.792 }, 00:16:27.792 { 00:16:27.792 "dma_device_id": "system", 00:16:27.792 "dma_device_type": 1 00:16:27.792 }, 00:16:27.792 { 00:16:27.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.792 "dma_device_type": 2 00:16:27.792 }, 00:16:27.792 { 00:16:27.792 "dma_device_id": "system", 00:16:27.792 "dma_device_type": 1 00:16:27.792 }, 00:16:27.792 { 00:16:27.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.792 "dma_device_type": 2 00:16:27.792 }, 00:16:27.792 { 00:16:27.792 "dma_device_id": "system", 00:16:27.792 "dma_device_type": 1 00:16:27.792 }, 00:16:27.792 { 00:16:27.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.792 "dma_device_type": 2 00:16:27.792 } 00:16:27.792 ], 00:16:27.792 "driver_specific": { 00:16:27.792 "raid": { 00:16:27.792 "uuid": "23d882da-05f9-4dce-ba63-091074ea016f", 00:16:27.792 "strip_size_kb": 64, 00:16:27.792 "state": "online", 00:16:27.792 "raid_level": "raid0", 00:16:27.792 "superblock": true, 00:16:27.792 "num_base_bdevs": 4, 00:16:27.792 "num_base_bdevs_discovered": 4, 00:16:27.792 "num_base_bdevs_operational": 4, 00:16:27.792 "base_bdevs_list": [ 00:16:27.792 { 00:16:27.792 "name": "NewBaseBdev", 00:16:27.792 "uuid": "ef14ceac-1bdb-40fc-8d08-bcfe2120b877", 00:16:27.792 "is_configured": true, 00:16:27.792 "data_offset": 2048, 00:16:27.792 "data_size": 63488 00:16:27.792 }, 00:16:27.792 { 00:16:27.792 "name": "BaseBdev2", 00:16:27.792 "uuid": "d787a6b6-5fd6-441b-b72b-f323b7192bdc", 00:16:27.792 "is_configured": true, 00:16:27.792 "data_offset": 2048, 00:16:27.792 "data_size": 63488 00:16:27.792 }, 00:16:27.792 { 00:16:27.792 
"name": "BaseBdev3", 00:16:27.792 "uuid": "00a5391f-aed3-4c7f-aa16-d45e9daeacd4", 00:16:27.792 "is_configured": true, 00:16:27.792 "data_offset": 2048, 00:16:27.792 "data_size": 63488 00:16:27.792 }, 00:16:27.792 { 00:16:27.792 "name": "BaseBdev4", 00:16:27.792 "uuid": "1a4f80a0-6538-411f-9c5d-cd727ab85f3b", 00:16:27.792 "is_configured": true, 00:16:27.792 "data_offset": 2048, 00:16:27.792 "data_size": 63488 00:16:27.792 } 00:16:27.792 ] 00:16:27.792 } 00:16:27.792 } 00:16:27.792 }' 00:16:27.792 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:27.792 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:27.792 BaseBdev2 00:16:27.792 BaseBdev3 00:16:27.792 BaseBdev4' 00:16:27.792 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:28.051 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:28.051 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:28.051 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:28.051 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:28.051 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.051 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.051 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.051 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:28.051 13:11:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:28.051 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:28.051 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:28.051 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:28.051 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.051 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.051 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.051 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:28.052 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:28.052 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:28.052 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:28.052 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.052 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.052 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:28.052 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.052 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:28.052 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:16:28.052 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:28.052 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:28.052 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.052 13:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.052 13:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:28.052 13:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.052 13:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:28.052 13:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:28.052 13:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:28.052 13:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.052 13:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.052 [2024-12-06 13:11:15.059197] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:28.052 [2024-12-06 13:11:15.059248] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:28.052 [2024-12-06 13:11:15.059389] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:28.052 [2024-12-06 13:11:15.059533] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:28.052 [2024-12-06 13:11:15.059557] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:16:28.052 13:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.052 13:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70403 00:16:28.052 13:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70403 ']' 00:16:28.052 13:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70403 00:16:28.310 13:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:28.310 13:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:28.310 13:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70403 00:16:28.310 killing process with pid 70403 00:16:28.310 13:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:28.310 13:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:28.310 13:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70403' 00:16:28.310 13:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70403 00:16:28.310 [2024-12-06 13:11:15.099399] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:28.310 13:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70403 00:16:28.570 [2024-12-06 13:11:15.486328] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:29.946 13:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:29.946 00:16:29.946 real 0m12.982s 00:16:29.946 user 0m21.254s 00:16:29.946 sys 0m1.905s 00:16:29.946 13:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:29.946 
************************************ 00:16:29.946 END TEST raid_state_function_test_sb 00:16:29.946 ************************************ 00:16:29.946 13:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.946 13:11:16 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:16:29.946 13:11:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:29.946 13:11:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:29.946 13:11:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:29.946 ************************************ 00:16:29.946 START TEST raid_superblock_test 00:16:29.946 ************************************ 00:16:29.946 13:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:16:29.946 13:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:16:29.946 13:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:29.946 13:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:29.946 13:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:29.946 13:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:29.946 13:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:29.946 13:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:29.946 13:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:29.946 13:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:29.946 13:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:29.946 13:11:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:29.946 13:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:29.946 13:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:29.946 13:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:16:29.946 13:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:29.946 13:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:29.946 13:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=71090 00:16:29.946 13:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:29.946 13:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 71090 00:16:29.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.946 13:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 71090 ']' 00:16:29.946 13:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.946 13:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:29.946 13:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.946 13:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:29.946 13:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.946 [2024-12-06 13:11:16.808020] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:16:29.946 [2024-12-06 13:11:16.808233] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71090 ] 00:16:30.204 [2024-12-06 13:11:16.998666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.204 [2024-12-06 13:11:17.158031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.462 [2024-12-06 13:11:17.381520] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:30.462 [2024-12-06 13:11:17.381600] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:31.030 13:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:31.030 13:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:31.030 13:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:31.030 13:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:31.030 13:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:31.030 13:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:31.030 13:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:31.030 13:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:31.030 13:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:31.030 13:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:31.030 13:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:16:31.030 
13:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.030 13:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.030 malloc1 00:16:31.030 13:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.030 13:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:31.030 13:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.030 13:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.030 [2024-12-06 13:11:17.908091] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:31.030 [2024-12-06 13:11:17.908208] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.031 [2024-12-06 13:11:17.908254] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:31.031 [2024-12-06 13:11:17.908275] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.031 [2024-12-06 13:11:17.911625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.031 [2024-12-06 13:11:17.911679] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:31.031 pt1 00:16:31.031 13:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.031 13:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:31.031 13:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:31.031 13:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:31.031 13:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:31.031 13:11:17 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:31.031 13:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:31.031 13:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:31.031 13:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:31.031 13:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:31.031 13:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.031 13:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.031 malloc2 00:16:31.031 13:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.031 13:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:31.031 13:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.031 13:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.031 [2024-12-06 13:11:17.968175] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:31.031 [2024-12-06 13:11:17.968303] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.031 [2024-12-06 13:11:17.968354] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:31.031 [2024-12-06 13:11:17.968374] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.031 [2024-12-06 13:11:17.971723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.031 [2024-12-06 13:11:17.971779] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:31.031 
pt2 00:16:31.031 13:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.031 13:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:31.031 13:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:31.031 13:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:31.031 13:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:31.031 13:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:31.031 13:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:31.031 13:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:31.031 13:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:31.031 13:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:31.031 13:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.031 13:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.031 malloc3 00:16:31.031 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.031 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:31.031 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.031 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.031 [2024-12-06 13:11:18.043873] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:31.031 [2024-12-06 13:11:18.044125] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.031 [2024-12-06 13:11:18.044224] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:31.290 [2024-12-06 13:11:18.044451] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.290 [2024-12-06 13:11:18.047795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.290 [2024-12-06 13:11:18.047981] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:31.290 pt3 00:16:31.290 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.290 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:31.290 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:31.290 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:31.290 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:31.290 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:31.290 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:31.290 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:31.290 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:31.290 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:31.290 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.290 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.290 malloc4 00:16:31.290 13:11:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.290 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:31.290 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.291 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.291 [2024-12-06 13:11:18.104681] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:31.291 [2024-12-06 13:11:18.104778] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.291 [2024-12-06 13:11:18.104819] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:31.291 [2024-12-06 13:11:18.104838] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.291 [2024-12-06 13:11:18.107930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.291 [2024-12-06 13:11:18.107984] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:31.291 pt4 00:16:31.291 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.291 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:31.291 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:31.291 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:31.291 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.291 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.291 [2024-12-06 13:11:18.116910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:31.291 [2024-12-06 
13:11:18.119708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:31.291 [2024-12-06 13:11:18.119856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:31.291 [2024-12-06 13:11:18.119945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:31.291 [2024-12-06 13:11:18.120264] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:31.291 [2024-12-06 13:11:18.120286] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:31.291 [2024-12-06 13:11:18.120707] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:31.291 [2024-12-06 13:11:18.121011] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:31.291 [2024-12-06 13:11:18.121233] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:31.291 [2024-12-06 13:11:18.121581] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.291 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.291 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:16:31.291 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.291 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.291 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:31.291 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.291 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:31.291 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:16:31.291 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.291 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.291 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.291 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.291 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.291 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.291 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.291 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.291 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.291 "name": "raid_bdev1", 00:16:31.291 "uuid": "649efa89-ac92-4ebc-bff5-9f080b7b0d2f", 00:16:31.291 "strip_size_kb": 64, 00:16:31.291 "state": "online", 00:16:31.291 "raid_level": "raid0", 00:16:31.291 "superblock": true, 00:16:31.291 "num_base_bdevs": 4, 00:16:31.291 "num_base_bdevs_discovered": 4, 00:16:31.291 "num_base_bdevs_operational": 4, 00:16:31.291 "base_bdevs_list": [ 00:16:31.291 { 00:16:31.291 "name": "pt1", 00:16:31.291 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:31.291 "is_configured": true, 00:16:31.291 "data_offset": 2048, 00:16:31.291 "data_size": 63488 00:16:31.291 }, 00:16:31.291 { 00:16:31.291 "name": "pt2", 00:16:31.291 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:31.291 "is_configured": true, 00:16:31.291 "data_offset": 2048, 00:16:31.291 "data_size": 63488 00:16:31.291 }, 00:16:31.291 { 00:16:31.291 "name": "pt3", 00:16:31.291 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:31.291 "is_configured": true, 00:16:31.291 "data_offset": 2048, 00:16:31.291 
"data_size": 63488 00:16:31.291 }, 00:16:31.291 { 00:16:31.291 "name": "pt4", 00:16:31.291 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:31.291 "is_configured": true, 00:16:31.291 "data_offset": 2048, 00:16:31.291 "data_size": 63488 00:16:31.291 } 00:16:31.291 ] 00:16:31.291 }' 00:16:31.291 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.291 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.858 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:31.858 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:31.858 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:31.858 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:31.858 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:31.858 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:31.858 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:31.858 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:31.858 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.858 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.858 [2024-12-06 13:11:18.634107] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:31.858 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.858 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:31.858 "name": "raid_bdev1", 00:16:31.858 "aliases": [ 00:16:31.858 "649efa89-ac92-4ebc-bff5-9f080b7b0d2f" 
00:16:31.858 ], 00:16:31.858 "product_name": "Raid Volume", 00:16:31.858 "block_size": 512, 00:16:31.858 "num_blocks": 253952, 00:16:31.858 "uuid": "649efa89-ac92-4ebc-bff5-9f080b7b0d2f", 00:16:31.858 "assigned_rate_limits": { 00:16:31.858 "rw_ios_per_sec": 0, 00:16:31.858 "rw_mbytes_per_sec": 0, 00:16:31.858 "r_mbytes_per_sec": 0, 00:16:31.858 "w_mbytes_per_sec": 0 00:16:31.858 }, 00:16:31.858 "claimed": false, 00:16:31.858 "zoned": false, 00:16:31.858 "supported_io_types": { 00:16:31.858 "read": true, 00:16:31.858 "write": true, 00:16:31.858 "unmap": true, 00:16:31.858 "flush": true, 00:16:31.858 "reset": true, 00:16:31.858 "nvme_admin": false, 00:16:31.858 "nvme_io": false, 00:16:31.858 "nvme_io_md": false, 00:16:31.858 "write_zeroes": true, 00:16:31.858 "zcopy": false, 00:16:31.858 "get_zone_info": false, 00:16:31.858 "zone_management": false, 00:16:31.858 "zone_append": false, 00:16:31.858 "compare": false, 00:16:31.858 "compare_and_write": false, 00:16:31.858 "abort": false, 00:16:31.858 "seek_hole": false, 00:16:31.858 "seek_data": false, 00:16:31.858 "copy": false, 00:16:31.858 "nvme_iov_md": false 00:16:31.858 }, 00:16:31.858 "memory_domains": [ 00:16:31.858 { 00:16:31.858 "dma_device_id": "system", 00:16:31.859 "dma_device_type": 1 00:16:31.859 }, 00:16:31.859 { 00:16:31.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.859 "dma_device_type": 2 00:16:31.859 }, 00:16:31.859 { 00:16:31.859 "dma_device_id": "system", 00:16:31.859 "dma_device_type": 1 00:16:31.859 }, 00:16:31.859 { 00:16:31.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.859 "dma_device_type": 2 00:16:31.859 }, 00:16:31.859 { 00:16:31.859 "dma_device_id": "system", 00:16:31.859 "dma_device_type": 1 00:16:31.859 }, 00:16:31.859 { 00:16:31.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.859 "dma_device_type": 2 00:16:31.859 }, 00:16:31.859 { 00:16:31.859 "dma_device_id": "system", 00:16:31.859 "dma_device_type": 1 00:16:31.859 }, 00:16:31.859 { 00:16:31.859 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:31.859 "dma_device_type": 2 00:16:31.859 } 00:16:31.859 ], 00:16:31.859 "driver_specific": { 00:16:31.859 "raid": { 00:16:31.859 "uuid": "649efa89-ac92-4ebc-bff5-9f080b7b0d2f", 00:16:31.859 "strip_size_kb": 64, 00:16:31.859 "state": "online", 00:16:31.859 "raid_level": "raid0", 00:16:31.859 "superblock": true, 00:16:31.859 "num_base_bdevs": 4, 00:16:31.859 "num_base_bdevs_discovered": 4, 00:16:31.859 "num_base_bdevs_operational": 4, 00:16:31.859 "base_bdevs_list": [ 00:16:31.859 { 00:16:31.859 "name": "pt1", 00:16:31.859 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:31.859 "is_configured": true, 00:16:31.859 "data_offset": 2048, 00:16:31.859 "data_size": 63488 00:16:31.859 }, 00:16:31.859 { 00:16:31.859 "name": "pt2", 00:16:31.859 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:31.859 "is_configured": true, 00:16:31.859 "data_offset": 2048, 00:16:31.859 "data_size": 63488 00:16:31.859 }, 00:16:31.859 { 00:16:31.859 "name": "pt3", 00:16:31.859 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:31.859 "is_configured": true, 00:16:31.859 "data_offset": 2048, 00:16:31.859 "data_size": 63488 00:16:31.859 }, 00:16:31.859 { 00:16:31.859 "name": "pt4", 00:16:31.859 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:31.859 "is_configured": true, 00:16:31.859 "data_offset": 2048, 00:16:31.859 "data_size": 63488 00:16:31.859 } 00:16:31.859 ] 00:16:31.859 } 00:16:31.859 } 00:16:31.859 }' 00:16:31.859 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:31.859 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:31.859 pt2 00:16:31.859 pt3 00:16:31.859 pt4' 00:16:31.859 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.859 13:11:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:31.859 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:31.859 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:31.859 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.859 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.859 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.859 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.859 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:31.859 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:31.859 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:31.859 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:31.859 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.859 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.859 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.859 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.119 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:32.119 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:32.119 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.119 13:11:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:32.119 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.119 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.119 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.119 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.119 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:32.119 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:32.119 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.119 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.119 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:32.119 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.119 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.119 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.119 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:32.119 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:32.119 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:32.119 13:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:32.119 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:32.119 13:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.119 [2024-12-06 13:11:18.990193] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:32.119 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.119 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=649efa89-ac92-4ebc-bff5-9f080b7b0d2f 00:16:32.119 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 649efa89-ac92-4ebc-bff5-9f080b7b0d2f ']' 00:16:32.119 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:32.119 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.119 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.119 [2024-12-06 13:11:19.041780] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:32.119 [2024-12-06 13:11:19.041825] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:32.119 [2024-12-06 13:11:19.041985] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.119 [2024-12-06 13:11:19.042097] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:32.119 [2024-12-06 13:11:19.042126] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:32.119 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.119 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.119 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.119 13:11:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:32.119 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:32.119 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.119 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:32.119 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:32.119 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:32.119 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:32.119 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.119 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.119 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.119 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:32.119 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:32.119 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.119 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.119 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.119 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:32.119 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:32.119 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.119 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.119 13:11:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.119 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:32.119 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:32.119 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.119 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.379 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.379 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:32.379 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.379 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.379 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:32.379 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.379 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:32.379 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:32.379 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:32.379 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:32.379 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:32.379 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:32.379 13:11:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:32.379 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:32.379 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:32.379 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.379 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.379 [2024-12-06 13:11:19.201863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:32.379 [2024-12-06 13:11:19.204644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:32.379 [2024-12-06 13:11:19.204873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:32.379 [2024-12-06 13:11:19.204959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:32.379 [2024-12-06 13:11:19.205062] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:32.379 [2024-12-06 13:11:19.205157] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:32.379 [2024-12-06 13:11:19.205201] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:32.379 [2024-12-06 13:11:19.205241] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:32.379 [2024-12-06 13:11:19.205270] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:32.379 [2024-12-06 13:11:19.205296] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:16:32.379 request: 00:16:32.379 { 00:16:32.379 "name": "raid_bdev1", 00:16:32.379 "raid_level": "raid0", 00:16:32.379 "base_bdevs": [ 00:16:32.379 "malloc1", 00:16:32.379 "malloc2", 00:16:32.379 "malloc3", 00:16:32.379 "malloc4" 00:16:32.379 ], 00:16:32.379 "strip_size_kb": 64, 00:16:32.379 "superblock": false, 00:16:32.380 "method": "bdev_raid_create", 00:16:32.380 "req_id": 1 00:16:32.380 } 00:16:32.380 Got JSON-RPC error response 00:16:32.380 response: 00:16:32.380 { 00:16:32.380 "code": -17, 00:16:32.380 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:32.380 } 00:16:32.380 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:32.380 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:16:32.380 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:32.380 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:32.380 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:32.380 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.380 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.380 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.380 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:32.380 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.380 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:32.380 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:32.380 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:16:32.380 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.380 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.380 [2024-12-06 13:11:19.269976] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:32.380 [2024-12-06 13:11:19.270234] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.380 [2024-12-06 13:11:19.270321] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:32.380 [2024-12-06 13:11:19.270505] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.380 [2024-12-06 13:11:19.273908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.380 [2024-12-06 13:11:19.274121] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:32.380 [2024-12-06 13:11:19.274403] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:32.380 [2024-12-06 13:11:19.274642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:32.380 pt1 00:16:32.380 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.380 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:16:32.380 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.380 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:32.380 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:32.380 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.380 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:16:32.380 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.380 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.380 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.380 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.380 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.380 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.380 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.380 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.380 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.380 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.380 "name": "raid_bdev1", 00:16:32.380 "uuid": "649efa89-ac92-4ebc-bff5-9f080b7b0d2f", 00:16:32.380 "strip_size_kb": 64, 00:16:32.380 "state": "configuring", 00:16:32.380 "raid_level": "raid0", 00:16:32.380 "superblock": true, 00:16:32.380 "num_base_bdevs": 4, 00:16:32.380 "num_base_bdevs_discovered": 1, 00:16:32.380 "num_base_bdevs_operational": 4, 00:16:32.380 "base_bdevs_list": [ 00:16:32.380 { 00:16:32.380 "name": "pt1", 00:16:32.380 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:32.380 "is_configured": true, 00:16:32.380 "data_offset": 2048, 00:16:32.380 "data_size": 63488 00:16:32.380 }, 00:16:32.380 { 00:16:32.380 "name": null, 00:16:32.380 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:32.380 "is_configured": false, 00:16:32.380 "data_offset": 2048, 00:16:32.380 "data_size": 63488 00:16:32.380 }, 00:16:32.380 { 00:16:32.380 "name": null, 00:16:32.380 
"uuid": "00000000-0000-0000-0000-000000000003", 00:16:32.380 "is_configured": false, 00:16:32.380 "data_offset": 2048, 00:16:32.380 "data_size": 63488 00:16:32.380 }, 00:16:32.380 { 00:16:32.380 "name": null, 00:16:32.380 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:32.380 "is_configured": false, 00:16:32.380 "data_offset": 2048, 00:16:32.380 "data_size": 63488 00:16:32.380 } 00:16:32.380 ] 00:16:32.380 }' 00:16:32.380 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.380 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.948 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:32.948 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:32.948 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.948 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.948 [2024-12-06 13:11:19.798737] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:32.948 [2024-12-06 13:11:19.798871] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.948 [2024-12-06 13:11:19.798913] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:32.948 [2024-12-06 13:11:19.798936] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.948 [2024-12-06 13:11:19.799667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.948 [2024-12-06 13:11:19.799713] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:32.948 [2024-12-06 13:11:19.799863] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:32.948 [2024-12-06 13:11:19.800124] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:32.948 pt2 00:16:32.948 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.948 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:32.948 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.948 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.948 [2024-12-06 13:11:19.806682] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:32.949 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.949 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:16:32.949 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.949 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:32.949 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:32.949 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.949 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:32.949 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.949 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.949 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.949 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.949 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.949 13:11:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.949 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.949 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.949 13:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.949 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.949 "name": "raid_bdev1", 00:16:32.949 "uuid": "649efa89-ac92-4ebc-bff5-9f080b7b0d2f", 00:16:32.949 "strip_size_kb": 64, 00:16:32.949 "state": "configuring", 00:16:32.949 "raid_level": "raid0", 00:16:32.949 "superblock": true, 00:16:32.949 "num_base_bdevs": 4, 00:16:32.949 "num_base_bdevs_discovered": 1, 00:16:32.949 "num_base_bdevs_operational": 4, 00:16:32.949 "base_bdevs_list": [ 00:16:32.949 { 00:16:32.949 "name": "pt1", 00:16:32.949 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:32.949 "is_configured": true, 00:16:32.949 "data_offset": 2048, 00:16:32.949 "data_size": 63488 00:16:32.949 }, 00:16:32.949 { 00:16:32.949 "name": null, 00:16:32.949 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:32.949 "is_configured": false, 00:16:32.949 "data_offset": 0, 00:16:32.949 "data_size": 63488 00:16:32.949 }, 00:16:32.949 { 00:16:32.949 "name": null, 00:16:32.949 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:32.949 "is_configured": false, 00:16:32.949 "data_offset": 2048, 00:16:32.949 "data_size": 63488 00:16:32.949 }, 00:16:32.949 { 00:16:32.949 "name": null, 00:16:32.949 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:32.949 "is_configured": false, 00:16:32.949 "data_offset": 2048, 00:16:32.949 "data_size": 63488 00:16:32.949 } 00:16:32.949 ] 00:16:32.949 }' 00:16:32.949 13:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.949 13:11:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:33.516 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:33.516 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:33.516 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:33.516 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.516 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.516 [2024-12-06 13:11:20.370875] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:33.516 [2024-12-06 13:11:20.370994] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.516 [2024-12-06 13:11:20.371038] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:33.516 [2024-12-06 13:11:20.371058] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.516 [2024-12-06 13:11:20.371791] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.516 [2024-12-06 13:11:20.371822] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:33.516 [2024-12-06 13:11:20.371965] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:33.516 [2024-12-06 13:11:20.372008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:33.516 pt2 00:16:33.516 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.516 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:33.516 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:33.516 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:33.516 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.516 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.516 [2024-12-06 13:11:20.378793] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:33.516 [2024-12-06 13:11:20.378870] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.516 [2024-12-06 13:11:20.378909] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:33.516 [2024-12-06 13:11:20.378927] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.516 [2024-12-06 13:11:20.379497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.516 [2024-12-06 13:11:20.379544] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:33.516 [2024-12-06 13:11:20.379658] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:33.516 [2024-12-06 13:11:20.379707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:33.516 pt3 00:16:33.516 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.516 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:33.516 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:33.516 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:33.516 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.516 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.516 [2024-12-06 13:11:20.386770] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:33.516 [2024-12-06 13:11:20.386838] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.516 [2024-12-06 13:11:20.386873] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:33.516 [2024-12-06 13:11:20.386893] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.516 [2024-12-06 13:11:20.387458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.516 [2024-12-06 13:11:20.387521] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:33.516 [2024-12-06 13:11:20.387633] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:33.516 [2024-12-06 13:11:20.387678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:33.516 [2024-12-06 13:11:20.387893] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:33.516 [2024-12-06 13:11:20.387919] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:33.516 [2024-12-06 13:11:20.388257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:33.516 [2024-12-06 13:11:20.388516] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:33.516 [2024-12-06 13:11:20.388543] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:33.516 [2024-12-06 13:11:20.388730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.516 pt4 00:16:33.516 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.516 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:33.516 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:16:33.516 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:16:33.516 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.516 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.516 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:33.516 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.516 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:33.517 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.517 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.517 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.517 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.517 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.517 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.517 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.517 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.517 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.517 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.517 "name": "raid_bdev1", 00:16:33.517 "uuid": "649efa89-ac92-4ebc-bff5-9f080b7b0d2f", 00:16:33.517 "strip_size_kb": 64, 00:16:33.517 "state": "online", 00:16:33.517 "raid_level": "raid0", 00:16:33.517 
"superblock": true, 00:16:33.517 "num_base_bdevs": 4, 00:16:33.517 "num_base_bdevs_discovered": 4, 00:16:33.517 "num_base_bdevs_operational": 4, 00:16:33.517 "base_bdevs_list": [ 00:16:33.517 { 00:16:33.517 "name": "pt1", 00:16:33.517 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:33.517 "is_configured": true, 00:16:33.517 "data_offset": 2048, 00:16:33.517 "data_size": 63488 00:16:33.517 }, 00:16:33.517 { 00:16:33.517 "name": "pt2", 00:16:33.517 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:33.517 "is_configured": true, 00:16:33.517 "data_offset": 2048, 00:16:33.517 "data_size": 63488 00:16:33.517 }, 00:16:33.517 { 00:16:33.517 "name": "pt3", 00:16:33.517 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:33.517 "is_configured": true, 00:16:33.517 "data_offset": 2048, 00:16:33.517 "data_size": 63488 00:16:33.517 }, 00:16:33.517 { 00:16:33.517 "name": "pt4", 00:16:33.517 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:33.517 "is_configured": true, 00:16:33.517 "data_offset": 2048, 00:16:33.517 "data_size": 63488 00:16:33.517 } 00:16:33.517 ] 00:16:33.517 }' 00:16:33.517 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.517 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.084 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:34.084 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:34.084 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:34.084 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:34.084 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:34.084 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:34.084 13:11:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:34.084 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:34.084 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.084 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.084 [2024-12-06 13:11:20.947487] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:34.084 13:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.084 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:34.084 "name": "raid_bdev1", 00:16:34.084 "aliases": [ 00:16:34.084 "649efa89-ac92-4ebc-bff5-9f080b7b0d2f" 00:16:34.084 ], 00:16:34.084 "product_name": "Raid Volume", 00:16:34.084 "block_size": 512, 00:16:34.084 "num_blocks": 253952, 00:16:34.084 "uuid": "649efa89-ac92-4ebc-bff5-9f080b7b0d2f", 00:16:34.084 "assigned_rate_limits": { 00:16:34.084 "rw_ios_per_sec": 0, 00:16:34.084 "rw_mbytes_per_sec": 0, 00:16:34.084 "r_mbytes_per_sec": 0, 00:16:34.084 "w_mbytes_per_sec": 0 00:16:34.084 }, 00:16:34.084 "claimed": false, 00:16:34.084 "zoned": false, 00:16:34.084 "supported_io_types": { 00:16:34.084 "read": true, 00:16:34.084 "write": true, 00:16:34.084 "unmap": true, 00:16:34.084 "flush": true, 00:16:34.084 "reset": true, 00:16:34.084 "nvme_admin": false, 00:16:34.084 "nvme_io": false, 00:16:34.084 "nvme_io_md": false, 00:16:34.084 "write_zeroes": true, 00:16:34.084 "zcopy": false, 00:16:34.084 "get_zone_info": false, 00:16:34.084 "zone_management": false, 00:16:34.084 "zone_append": false, 00:16:34.084 "compare": false, 00:16:34.084 "compare_and_write": false, 00:16:34.084 "abort": false, 00:16:34.084 "seek_hole": false, 00:16:34.084 "seek_data": false, 00:16:34.084 "copy": false, 00:16:34.084 "nvme_iov_md": false 00:16:34.084 }, 00:16:34.084 
"memory_domains": [ 00:16:34.084 { 00:16:34.084 "dma_device_id": "system", 00:16:34.084 "dma_device_type": 1 00:16:34.084 }, 00:16:34.084 { 00:16:34.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.084 "dma_device_type": 2 00:16:34.084 }, 00:16:34.084 { 00:16:34.084 "dma_device_id": "system", 00:16:34.084 "dma_device_type": 1 00:16:34.084 }, 00:16:34.084 { 00:16:34.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.084 "dma_device_type": 2 00:16:34.084 }, 00:16:34.084 { 00:16:34.084 "dma_device_id": "system", 00:16:34.084 "dma_device_type": 1 00:16:34.084 }, 00:16:34.084 { 00:16:34.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.084 "dma_device_type": 2 00:16:34.084 }, 00:16:34.084 { 00:16:34.084 "dma_device_id": "system", 00:16:34.084 "dma_device_type": 1 00:16:34.084 }, 00:16:34.084 { 00:16:34.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.084 "dma_device_type": 2 00:16:34.084 } 00:16:34.084 ], 00:16:34.084 "driver_specific": { 00:16:34.084 "raid": { 00:16:34.084 "uuid": "649efa89-ac92-4ebc-bff5-9f080b7b0d2f", 00:16:34.084 "strip_size_kb": 64, 00:16:34.084 "state": "online", 00:16:34.084 "raid_level": "raid0", 00:16:34.084 "superblock": true, 00:16:34.084 "num_base_bdevs": 4, 00:16:34.084 "num_base_bdevs_discovered": 4, 00:16:34.084 "num_base_bdevs_operational": 4, 00:16:34.084 "base_bdevs_list": [ 00:16:34.084 { 00:16:34.084 "name": "pt1", 00:16:34.084 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:34.084 "is_configured": true, 00:16:34.084 "data_offset": 2048, 00:16:34.084 "data_size": 63488 00:16:34.084 }, 00:16:34.084 { 00:16:34.084 "name": "pt2", 00:16:34.084 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:34.084 "is_configured": true, 00:16:34.084 "data_offset": 2048, 00:16:34.084 "data_size": 63488 00:16:34.084 }, 00:16:34.084 { 00:16:34.084 "name": "pt3", 00:16:34.084 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:34.084 "is_configured": true, 00:16:34.084 "data_offset": 2048, 00:16:34.084 "data_size": 63488 
00:16:34.084 }, 00:16:34.084 { 00:16:34.084 "name": "pt4", 00:16:34.084 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:34.084 "is_configured": true, 00:16:34.084 "data_offset": 2048, 00:16:34.084 "data_size": 63488 00:16:34.084 } 00:16:34.084 ] 00:16:34.084 } 00:16:34.084 } 00:16:34.084 }' 00:16:34.084 13:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:34.084 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:34.084 pt2 00:16:34.084 pt3 00:16:34.084 pt4' 00:16:34.084 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:34.084 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:34.343 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:34.343 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:34.343 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.343 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.343 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:34.343 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.343 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:34.343 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:34.343 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:34.343 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:34.343 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:34.343 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.343 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.343 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.343 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:34.343 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:34.343 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:34.343 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:34.343 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:34.343 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.343 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.343 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.343 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:34.343 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:34.343 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:34.343 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:34.343 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.343 13:11:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:34.343 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:34.343 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.343 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:34.343 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:34.343 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:34.343 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:34.343 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.343 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.343 [2024-12-06 13:11:21.347561] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:34.602 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.602 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 649efa89-ac92-4ebc-bff5-9f080b7b0d2f '!=' 649efa89-ac92-4ebc-bff5-9f080b7b0d2f ']' 00:16:34.602 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:16:34.602 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:34.602 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:34.602 13:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 71090 00:16:34.602 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 71090 ']' 00:16:34.602 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 71090 00:16:34.602 13:11:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:16:34.602 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:34.602 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71090 00:16:34.602 killing process with pid 71090 00:16:34.602 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:34.602 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:34.602 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71090' 00:16:34.602 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 71090 00:16:34.602 13:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 71090 00:16:34.602 [2024-12-06 13:11:21.424712] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:34.602 [2024-12-06 13:11:21.424863] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:34.602 [2024-12-06 13:11:21.424987] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:34.602 [2024-12-06 13:11:21.425008] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:34.861 [2024-12-06 13:11:21.811271] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:36.238 13:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:36.238 00:16:36.238 real 0m6.260s 00:16:36.238 user 0m9.244s 00:16:36.238 sys 0m1.035s 00:16:36.238 13:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:36.238 ************************************ 00:16:36.238 END TEST raid_superblock_test 00:16:36.238 ************************************ 00:16:36.238 13:11:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.238 13:11:23 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:16:36.238 13:11:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:36.238 13:11:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:36.238 13:11:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:36.238 ************************************ 00:16:36.238 START TEST raid_read_error_test 00:16:36.238 ************************************ 00:16:36.238 13:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:16:36.238 13:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:16:36.238 13:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:16:36.238 13:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:16:36.238 13:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:36.238 13:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:36.238 13:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:36.238 13:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:36.238 13:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:36.238 13:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:36.238 13:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:36.238 13:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:36.238 13:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:16:36.238 13:11:23 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:36.238 13:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:36.238 13:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:16:36.238 13:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:36.238 13:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:36.238 13:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:36.238 13:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:36.238 13:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:36.238 13:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:36.238 13:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:36.238 13:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:36.238 13:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:36.238 13:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:16:36.238 13:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:16:36.238 13:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:16:36.238 13:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:36.239 13:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.WGTl5Kcwjj 00:16:36.239 13:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71360 00:16:36.239 13:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:36.239 13:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71360 00:16:36.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:36.239 13:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71360 ']' 00:16:36.239 13:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.239 13:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:36.239 13:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.239 13:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:36.239 13:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.239 [2024-12-06 13:11:23.138172] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
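The `(( i <= num_base_bdevs ))` loop traced above expands into the base_bdevs array (`BaseBdev1` … `BaseBdev4`) that the test later passes to `bdev_raid_create`. A minimal Python sketch of that same expansion, shown only to clarify the shell trace (the test itself does this in shell):

```python
def make_base_bdev_names(num_base_bdevs):
    """Mirror the traced shell loop:
    for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done"""
    return ["BaseBdev%d" % i for i in range(1, num_base_bdevs + 1)]

# With num_base_bdevs=4, as in this raid_io_error_test run, this yields
# the four names claimed by raid_bdev1 in the log.
```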
00:16:36.239 [2024-12-06 13:11:23.138678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71360 ] 00:16:36.497 [2024-12-06 13:11:23.318609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.497 [2024-12-06 13:11:23.465488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.756 [2024-12-06 13:11:23.689715] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:36.756 [2024-12-06 13:11:23.690106] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:37.322 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:37.322 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:16:37.322 13:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:37.322 13:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:37.322 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.322 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.322 BaseBdev1_malloc 00:16:37.322 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.322 13:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:16:37.322 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.322 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.322 true 00:16:37.322 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:37.322 13:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:37.322 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.322 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.322 [2024-12-06 13:11:24.142293] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:37.322 [2024-12-06 13:11:24.142653] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.322 [2024-12-06 13:11:24.142716] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:37.322 [2024-12-06 13:11:24.142750] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.322 [2024-12-06 13:11:24.145937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.322 BaseBdev1 00:16:37.322 [2024-12-06 13:11:24.146177] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:37.322 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.322 13:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:37.322 13:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:37.322 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.322 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.322 BaseBdev2_malloc 00:16:37.322 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.322 13:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:37.322 13:11:24 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.322 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.322 true 00:16:37.322 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.322 13:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:37.322 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.322 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.322 [2024-12-06 13:11:24.210296] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:37.322 [2024-12-06 13:11:24.210638] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.322 [2024-12-06 13:11:24.210684] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:37.322 [2024-12-06 13:11:24.210724] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.322 [2024-12-06 13:11:24.213791] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.322 [2024-12-06 13:11:24.213848] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:37.322 BaseBdev2 00:16:37.322 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.322 13:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:37.322 13:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:37.322 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.322 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.322 BaseBdev3_malloc 00:16:37.322 13:11:24 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.323 13:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:16:37.323 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.323 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.323 true 00:16:37.323 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.323 13:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:37.323 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.323 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.323 [2024-12-06 13:11:24.287797] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:37.323 [2024-12-06 13:11:24.288141] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.323 [2024-12-06 13:11:24.288186] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:37.323 [2024-12-06 13:11:24.288212] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.323 [2024-12-06 13:11:24.291325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.323 [2024-12-06 13:11:24.291551] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:37.323 BaseBdev3 00:16:37.323 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.323 13:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:37.323 13:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:16:37.323 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.323 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.581 BaseBdev4_malloc 00:16:37.581 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.581 13:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:16:37.581 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.581 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.581 true 00:16:37.581 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.581 13:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:37.581 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.581 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.581 [2024-12-06 13:11:24.355831] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:37.581 [2024-12-06 13:11:24.355951] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.581 [2024-12-06 13:11:24.355988] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:37.581 [2024-12-06 13:11:24.356012] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.581 [2024-12-06 13:11:24.359169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.581 [2024-12-06 13:11:24.359486] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:37.581 BaseBdev4 00:16:37.581 13:11:24 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.581 13:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:16:37.581 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.581 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.581 [2024-12-06 13:11:24.367950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:37.581 [2024-12-06 13:11:24.370674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:37.581 [2024-12-06 13:11:24.370814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:37.581 [2024-12-06 13:11:24.370936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:37.581 [2024-12-06 13:11:24.371276] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:16:37.581 [2024-12-06 13:11:24.371311] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:37.581 [2024-12-06 13:11:24.371683] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:16:37.581 [2024-12-06 13:11:24.371944] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:16:37.581 [2024-12-06 13:11:24.371977] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:16:37.581 [2024-12-06 13:11:24.372264] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.581 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.581 13:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:16:37.581 13:11:24 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.581 13:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:37.581 13:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:37.581 13:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.581 13:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:37.581 13:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.581 13:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.581 13:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.581 13:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.581 13:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.581 13:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.581 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.581 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.581 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.581 13:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.581 "name": "raid_bdev1", 00:16:37.581 "uuid": "8f77c8ff-2b02-4605-8e1f-8a41d66d1f59", 00:16:37.581 "strip_size_kb": 64, 00:16:37.581 "state": "online", 00:16:37.581 "raid_level": "raid0", 00:16:37.581 "superblock": true, 00:16:37.581 "num_base_bdevs": 4, 00:16:37.581 "num_base_bdevs_discovered": 4, 00:16:37.581 "num_base_bdevs_operational": 4, 00:16:37.581 "base_bdevs_list": [ 00:16:37.581 
{ 00:16:37.581 "name": "BaseBdev1", 00:16:37.581 "uuid": "e6b52c69-9f4e-5b82-bfc4-f6de6ab24159", 00:16:37.581 "is_configured": true, 00:16:37.581 "data_offset": 2048, 00:16:37.581 "data_size": 63488 00:16:37.581 }, 00:16:37.581 { 00:16:37.581 "name": "BaseBdev2", 00:16:37.581 "uuid": "69e8d81a-e6b2-58c8-b1f1-f36e8d915b46", 00:16:37.581 "is_configured": true, 00:16:37.581 "data_offset": 2048, 00:16:37.581 "data_size": 63488 00:16:37.581 }, 00:16:37.581 { 00:16:37.581 "name": "BaseBdev3", 00:16:37.581 "uuid": "cc74f96f-5977-5ae0-b1dc-d6f750b2fab5", 00:16:37.581 "is_configured": true, 00:16:37.581 "data_offset": 2048, 00:16:37.581 "data_size": 63488 00:16:37.581 }, 00:16:37.581 { 00:16:37.581 "name": "BaseBdev4", 00:16:37.581 "uuid": "2268215e-4408-5e40-8737-2c179395d2e6", 00:16:37.581 "is_configured": true, 00:16:37.581 "data_offset": 2048, 00:16:37.581 "data_size": 63488 00:16:37.581 } 00:16:37.581 ] 00:16:37.581 }' 00:16:37.581 13:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.581 13:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.146 13:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:38.146 13:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:38.146 [2024-12-06 13:11:25.018023] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:16:39.089 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:39.089 13:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.089 13:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.089 13:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.089 13:11:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:39.089 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:16:39.089 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:16:39.089 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:16:39.089 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.089 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.089 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:39.089 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:39.089 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:39.089 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.089 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.089 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.089 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.089 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.089 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.089 13:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.089 13:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.089 13:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.089 13:11:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.089 "name": "raid_bdev1", 00:16:39.089 "uuid": "8f77c8ff-2b02-4605-8e1f-8a41d66d1f59", 00:16:39.089 "strip_size_kb": 64, 00:16:39.089 "state": "online", 00:16:39.089 "raid_level": "raid0", 00:16:39.089 "superblock": true, 00:16:39.089 "num_base_bdevs": 4, 00:16:39.089 "num_base_bdevs_discovered": 4, 00:16:39.089 "num_base_bdevs_operational": 4, 00:16:39.089 "base_bdevs_list": [ 00:16:39.089 { 00:16:39.089 "name": "BaseBdev1", 00:16:39.089 "uuid": "e6b52c69-9f4e-5b82-bfc4-f6de6ab24159", 00:16:39.089 "is_configured": true, 00:16:39.089 "data_offset": 2048, 00:16:39.089 "data_size": 63488 00:16:39.089 }, 00:16:39.089 { 00:16:39.089 "name": "BaseBdev2", 00:16:39.089 "uuid": "69e8d81a-e6b2-58c8-b1f1-f36e8d915b46", 00:16:39.089 "is_configured": true, 00:16:39.089 "data_offset": 2048, 00:16:39.089 "data_size": 63488 00:16:39.089 }, 00:16:39.089 { 00:16:39.089 "name": "BaseBdev3", 00:16:39.089 "uuid": "cc74f96f-5977-5ae0-b1dc-d6f750b2fab5", 00:16:39.089 "is_configured": true, 00:16:39.089 "data_offset": 2048, 00:16:39.089 "data_size": 63488 00:16:39.089 }, 00:16:39.089 { 00:16:39.089 "name": "BaseBdev4", 00:16:39.089 "uuid": "2268215e-4408-5e40-8737-2c179395d2e6", 00:16:39.089 "is_configured": true, 00:16:39.089 "data_offset": 2048, 00:16:39.089 "data_size": 63488 00:16:39.089 } 00:16:39.089 ] 00:16:39.089 }' 00:16:39.089 13:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.089 13:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.654 13:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:39.654 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.654 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.654 [2024-12-06 13:11:26.433077] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:39.654 [2024-12-06 13:11:26.433441] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:39.654 { 00:16:39.654 "results": [ 00:16:39.654 { 00:16:39.654 "job": "raid_bdev1", 00:16:39.654 "core_mask": "0x1", 00:16:39.654 "workload": "randrw", 00:16:39.654 "percentage": 50, 00:16:39.654 "status": "finished", 00:16:39.654 "queue_depth": 1, 00:16:39.654 "io_size": 131072, 00:16:39.654 "runtime": 1.41263, 00:16:39.654 "iops": 8931.567360172161, 00:16:39.654 "mibps": 1116.4459200215201, 00:16:39.654 "io_failed": 1, 00:16:39.654 "io_timeout": 0, 00:16:39.654 "avg_latency_us": 156.68774016916672, 00:16:39.654 "min_latency_us": 46.77818181818182, 00:16:39.654 "max_latency_us": 1869.2654545454545 00:16:39.654 } 00:16:39.654 ], 00:16:39.654 "core_count": 1 00:16:39.654 } 00:16:39.654 [2024-12-06 13:11:26.437154] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:39.655 [2024-12-06 13:11:26.437328] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:39.655 [2024-12-06 13:11:26.437407] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:39.655 [2024-12-06 13:11:26.437433] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:16:39.655 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.655 13:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71360 00:16:39.655 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71360 ']' 00:16:39.655 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71360 00:16:39.655 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:16:39.655 13:11:26 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:39.655 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71360 00:16:39.655 killing process with pid 71360 00:16:39.655 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:39.655 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:39.655 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71360' 00:16:39.655 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71360 00:16:39.655 13:11:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71360 00:16:39.655 [2024-12-06 13:11:26.479161] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:39.912 [2024-12-06 13:11:26.798149] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:41.286 13:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.WGTl5Kcwjj 00:16:41.286 13:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:41.286 13:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:41.286 13:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:16:41.286 13:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:16:41.286 13:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:41.286 13:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:41.286 13:11:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:16:41.286 ************************************ 00:16:41.286 END TEST raid_read_error_test 00:16:41.286 ************************************ 00:16:41.286 00:16:41.286 real 0m4.992s 
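The `fail_per_s=0.71` value grepped and awk'd out of the bdevperf log above follows directly from the results JSON printed earlier in this run (`io_failed: 1` over `runtime: 1.41263` seconds), and `mibps` is just `iops` scaled by the 128 KiB `io_size`. A quick sanity-check sketch, with the numbers copied verbatim from this run's JSON:

```python
# Values taken from the bdevperf "results" block for raid_bdev1 above.
io_failed = 1
runtime_s = 1.41263
iops = 8931.567360172161
io_size = 131072  # 128 KiB, from the bdevperf -o 128k option

fail_per_s = io_failed / runtime_s        # ~0.7079, reported as 0.71
mibps = iops * io_size / (1024 * 1024)    # 1116.4459... MiB/s, matching the log
```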
00:16:41.286 user 0m5.999s 00:16:41.286 sys 0m0.667s 00:16:41.286 13:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:41.286 13:11:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.286 13:11:28 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:16:41.286 13:11:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:41.286 13:11:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:41.286 13:11:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:41.286 ************************************ 00:16:41.286 START TEST raid_write_error_test 00:16:41.286 ************************************ 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.VzZNxksj3l 00:16:41.286 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71506 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71506 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71506 ']' 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:41.286 13:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.286 [2024-12-06 13:11:28.194535] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:16:41.287 [2024-12-06 13:11:28.194747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71506 ] 00:16:41.545 [2024-12-06 13:11:28.388489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.545 [2024-12-06 13:11:28.558093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.803 [2024-12-06 13:11:28.784958] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:41.803 [2024-12-06 13:11:28.785059] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.370 BaseBdev1_malloc 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.370 true 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.370 [2024-12-06 13:11:29.209099] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:42.370 [2024-12-06 13:11:29.209217] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.370 [2024-12-06 13:11:29.209257] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:42.370 [2024-12-06 13:11:29.209278] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.370 [2024-12-06 13:11:29.212548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.370 [2024-12-06 13:11:29.212616] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:42.370 BaseBdev1 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.370 BaseBdev2_malloc 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:42.370 13:11:29 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.370 true 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.370 [2024-12-06 13:11:29.268913] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:42.370 [2024-12-06 13:11:29.269020] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.370 [2024-12-06 13:11:29.269051] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:42.370 [2024-12-06 13:11:29.269070] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.370 [2024-12-06 13:11:29.272087] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.370 [2024-12-06 13:11:29.272379] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:42.370 BaseBdev2 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:16:42.370 BaseBdev3_malloc 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.370 true 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.370 [2024-12-06 13:11:29.342392] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:42.370 [2024-12-06 13:11:29.342507] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.370 [2024-12-06 13:11:29.342543] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:42.370 [2024-12-06 13:11:29.342563] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.370 [2024-12-06 13:11:29.345780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.370 [2024-12-06 13:11:29.345841] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:42.370 BaseBdev3 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.370 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.629 BaseBdev4_malloc 00:16:42.629 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.629 13:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:16:42.629 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.630 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.630 true 00:16:42.630 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.630 13:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:42.630 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.630 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.630 [2024-12-06 13:11:29.407118] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:42.630 [2024-12-06 13:11:29.407227] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.630 [2024-12-06 13:11:29.407263] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:42.630 [2024-12-06 13:11:29.407284] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.630 [2024-12-06 13:11:29.410527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.630 [2024-12-06 13:11:29.410594] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:42.630 BaseBdev4 
00:16:42.630 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.630 13:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:16:42.630 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.630 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.630 [2024-12-06 13:11:29.415465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:42.630 [2024-12-06 13:11:29.418250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:42.630 [2024-12-06 13:11:29.418380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:42.630 [2024-12-06 13:11:29.418520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:42.630 [2024-12-06 13:11:29.418902] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:16:42.630 [2024-12-06 13:11:29.418933] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:42.630 [2024-12-06 13:11:29.419322] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:16:42.630 [2024-12-06 13:11:29.419603] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:16:42.630 [2024-12-06 13:11:29.419625] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:16:42.630 [2024-12-06 13:11:29.419943] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.630 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.630 13:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:16:42.630 13:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.630 13:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.630 13:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:42.630 13:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.630 13:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:42.630 13:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.630 13:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.630 13:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.630 13:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.630 13:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.630 13:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.630 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.630 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.630 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.630 13:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.630 "name": "raid_bdev1", 00:16:42.630 "uuid": "00fa5686-f939-41ec-9289-bfa484816005", 00:16:42.630 "strip_size_kb": 64, 00:16:42.630 "state": "online", 00:16:42.630 "raid_level": "raid0", 00:16:42.630 "superblock": true, 00:16:42.630 "num_base_bdevs": 4, 00:16:42.630 "num_base_bdevs_discovered": 4, 00:16:42.630 
"num_base_bdevs_operational": 4, 00:16:42.630 "base_bdevs_list": [ 00:16:42.630 { 00:16:42.630 "name": "BaseBdev1", 00:16:42.630 "uuid": "ff2545ec-5aa3-5f48-880b-d0973b57c266", 00:16:42.630 "is_configured": true, 00:16:42.630 "data_offset": 2048, 00:16:42.630 "data_size": 63488 00:16:42.630 }, 00:16:42.630 { 00:16:42.630 "name": "BaseBdev2", 00:16:42.630 "uuid": "e7e2197a-1098-502f-89ff-0834271b82e9", 00:16:42.630 "is_configured": true, 00:16:42.630 "data_offset": 2048, 00:16:42.630 "data_size": 63488 00:16:42.630 }, 00:16:42.630 { 00:16:42.630 "name": "BaseBdev3", 00:16:42.630 "uuid": "7e4ffbf4-6eda-5ef0-8328-61cde9925773", 00:16:42.630 "is_configured": true, 00:16:42.630 "data_offset": 2048, 00:16:42.630 "data_size": 63488 00:16:42.630 }, 00:16:42.630 { 00:16:42.630 "name": "BaseBdev4", 00:16:42.630 "uuid": "e42a5912-5164-5e4a-869c-197c309f574e", 00:16:42.630 "is_configured": true, 00:16:42.630 "data_offset": 2048, 00:16:42.630 "data_size": 63488 00:16:42.630 } 00:16:42.630 ] 00:16:42.630 }' 00:16:42.630 13:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.630 13:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.256 13:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:43.256 13:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:43.256 [2024-12-06 13:11:30.089585] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:16:44.223 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:44.224 13:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.224 13:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.224 13:11:30 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.224 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:44.224 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:16:44.224 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:16:44.224 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:16:44.224 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.224 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.224 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:44.224 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.224 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:44.224 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.224 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.224 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.224 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.224 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.224 13:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.224 13:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.224 13:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.224 13:11:30 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.224 13:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.224 "name": "raid_bdev1", 00:16:44.224 "uuid": "00fa5686-f939-41ec-9289-bfa484816005", 00:16:44.224 "strip_size_kb": 64, 00:16:44.224 "state": "online", 00:16:44.224 "raid_level": "raid0", 00:16:44.224 "superblock": true, 00:16:44.224 "num_base_bdevs": 4, 00:16:44.224 "num_base_bdevs_discovered": 4, 00:16:44.224 "num_base_bdevs_operational": 4, 00:16:44.224 "base_bdevs_list": [ 00:16:44.224 { 00:16:44.224 "name": "BaseBdev1", 00:16:44.224 "uuid": "ff2545ec-5aa3-5f48-880b-d0973b57c266", 00:16:44.224 "is_configured": true, 00:16:44.224 "data_offset": 2048, 00:16:44.224 "data_size": 63488 00:16:44.224 }, 00:16:44.224 { 00:16:44.224 "name": "BaseBdev2", 00:16:44.224 "uuid": "e7e2197a-1098-502f-89ff-0834271b82e9", 00:16:44.224 "is_configured": true, 00:16:44.224 "data_offset": 2048, 00:16:44.224 "data_size": 63488 00:16:44.224 }, 00:16:44.224 { 00:16:44.224 "name": "BaseBdev3", 00:16:44.224 "uuid": "7e4ffbf4-6eda-5ef0-8328-61cde9925773", 00:16:44.224 "is_configured": true, 00:16:44.224 "data_offset": 2048, 00:16:44.224 "data_size": 63488 00:16:44.224 }, 00:16:44.224 { 00:16:44.224 "name": "BaseBdev4", 00:16:44.224 "uuid": "e42a5912-5164-5e4a-869c-197c309f574e", 00:16:44.224 "is_configured": true, 00:16:44.224 "data_offset": 2048, 00:16:44.224 "data_size": 63488 00:16:44.224 } 00:16:44.224 ] 00:16:44.224 }' 00:16:44.224 13:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.224 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.483 13:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:44.483 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.483 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:16:44.483 [2024-12-06 13:11:31.463441] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:44.483 [2024-12-06 13:11:31.463874] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:44.483 [2024-12-06 13:11:31.467555] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:44.483 { 00:16:44.483 "results": [ 00:16:44.483 { 00:16:44.483 "job": "raid_bdev1", 00:16:44.483 "core_mask": "0x1", 00:16:44.483 "workload": "randrw", 00:16:44.483 "percentage": 50, 00:16:44.483 "status": "finished", 00:16:44.483 "queue_depth": 1, 00:16:44.483 "io_size": 131072, 00:16:44.483 "runtime": 1.371749, 00:16:44.483 "iops": 9455.811522370346, 00:16:44.483 "mibps": 1181.9764402962933, 00:16:44.483 "io_failed": 1, 00:16:44.483 "io_timeout": 0, 00:16:44.483 "avg_latency_us": 148.39985871667648, 00:16:44.483 "min_latency_us": 43.52, 00:16:44.483 "max_latency_us": 1861.8181818181818 00:16:44.483 } 00:16:44.483 ], 00:16:44.483 "core_count": 1 00:16:44.483 } 00:16:44.483 [2024-12-06 13:11:31.467864] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.483 [2024-12-06 13:11:31.467944] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:44.483 [2024-12-06 13:11:31.467967] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:16:44.483 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.483 13:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71506 00:16:44.483 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71506 ']' 00:16:44.483 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71506 00:16:44.483 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:16:44.483 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:44.483 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71506 00:16:44.741 killing process with pid 71506 00:16:44.741 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:44.741 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:44.741 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71506' 00:16:44.741 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71506 00:16:44.741 13:11:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71506 00:16:44.741 [2024-12-06 13:11:31.505097] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:45.000 [2024-12-06 13:11:31.825755] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:46.376 13:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:46.376 13:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.VzZNxksj3l 00:16:46.376 13:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:46.376 13:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:16:46.376 13:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:16:46.376 13:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:46.376 13:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:46.376 ************************************ 00:16:46.376 END TEST raid_write_error_test 00:16:46.376 ************************************ 00:16:46.376 13:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- 
# [[ 0.73 != \0\.\0\0 ]] 00:16:46.376 00:16:46.376 real 0m4.994s 00:16:46.376 user 0m6.006s 00:16:46.376 sys 0m0.707s 00:16:46.376 13:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:46.376 13:11:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.376 13:11:33 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:16:46.376 13:11:33 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:16:46.376 13:11:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:46.376 13:11:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:46.376 13:11:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:46.376 ************************************ 00:16:46.376 START TEST raid_state_function_test 00:16:46.376 ************************************ 00:16:46.376 13:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:16:46.376 13:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:16:46.376 13:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:46.376 13:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:46.376 13:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:46.376 13:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:46.376 13:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:46.376 13:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:46.376 13:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:46.376 13:11:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:46.376 13:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:46.376 13:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:46.376 13:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:46.376 13:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:46.376 13:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:46.376 13:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:46.376 13:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:46.376 13:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:46.376 13:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:46.376 13:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:46.376 13:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:46.376 13:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:46.376 13:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:46.377 13:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:46.377 13:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:46.377 13:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:16:46.377 13:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:46.377 13:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:16:46.377 13:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:46.377 13:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:46.377 13:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71655 00:16:46.377 Process raid pid: 71655 00:16:46.377 13:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:46.377 13:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71655' 00:16:46.377 13:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71655 00:16:46.377 13:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71655 ']' 00:16:46.377 13:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.377 13:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:46.377 13:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.377 13:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:46.377 13:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.377 [2024-12-06 13:11:33.228324] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:16:46.377 [2024-12-06 13:11:33.228525] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.635 [2024-12-06 13:11:33.414930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.635 [2024-12-06 13:11:33.567140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.893 [2024-12-06 13:11:33.797399] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:46.894 [2024-12-06 13:11:33.797445] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:47.461 13:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:47.461 13:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:47.461 13:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:47.461 13:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.461 13:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.461 [2024-12-06 13:11:34.196471] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:47.461 [2024-12-06 13:11:34.196596] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:47.461 [2024-12-06 13:11:34.196614] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:47.461 [2024-12-06 13:11:34.196631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:47.461 [2024-12-06 13:11:34.196641] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:16:47.461 [2024-12-06 13:11:34.196654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:47.461 [2024-12-06 13:11:34.196663] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:47.461 [2024-12-06 13:11:34.196677] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:47.461 13:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.461 13:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:47.461 13:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:47.461 13:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:47.461 13:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:47.461 13:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.461 13:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:47.461 13:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.461 13:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.461 13:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.461 13:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.461 13:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.461 13:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.461 13:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:16:47.461 13:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.461 13:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.461 13:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.461 "name": "Existed_Raid", 00:16:47.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.461 "strip_size_kb": 64, 00:16:47.461 "state": "configuring", 00:16:47.461 "raid_level": "concat", 00:16:47.461 "superblock": false, 00:16:47.461 "num_base_bdevs": 4, 00:16:47.461 "num_base_bdevs_discovered": 0, 00:16:47.461 "num_base_bdevs_operational": 4, 00:16:47.461 "base_bdevs_list": [ 00:16:47.461 { 00:16:47.461 "name": "BaseBdev1", 00:16:47.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.461 "is_configured": false, 00:16:47.461 "data_offset": 0, 00:16:47.461 "data_size": 0 00:16:47.461 }, 00:16:47.461 { 00:16:47.461 "name": "BaseBdev2", 00:16:47.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.461 "is_configured": false, 00:16:47.461 "data_offset": 0, 00:16:47.461 "data_size": 0 00:16:47.461 }, 00:16:47.461 { 00:16:47.461 "name": "BaseBdev3", 00:16:47.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.461 "is_configured": false, 00:16:47.461 "data_offset": 0, 00:16:47.461 "data_size": 0 00:16:47.461 }, 00:16:47.461 { 00:16:47.461 "name": "BaseBdev4", 00:16:47.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.461 "is_configured": false, 00:16:47.461 "data_offset": 0, 00:16:47.461 "data_size": 0 00:16:47.461 } 00:16:47.461 ] 00:16:47.461 }' 00:16:47.461 13:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.461 13:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.720 13:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:16:47.720 13:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.720 13:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.720 [2024-12-06 13:11:34.692600] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:47.720 [2024-12-06 13:11:34.692669] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:47.720 13:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.720 13:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:47.720 13:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.720 13:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.720 [2024-12-06 13:11:34.704563] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:47.720 [2024-12-06 13:11:34.704618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:47.720 [2024-12-06 13:11:34.704634] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:47.720 [2024-12-06 13:11:34.704650] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:47.720 [2024-12-06 13:11:34.704660] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:47.720 [2024-12-06 13:11:34.704675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:47.720 [2024-12-06 13:11:34.704684] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:47.720 [2024-12-06 13:11:34.704698] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:47.720 13:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.720 13:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:47.720 13:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.720 13:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.979 [2024-12-06 13:11:34.752457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:47.979 BaseBdev1 00:16:47.979 13:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.979 13:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:47.979 13:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:47.979 13:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:47.979 13:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:47.979 13:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:47.979 13:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:47.979 13:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:47.979 13:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.979 13:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.979 13:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.979 13:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:47.979 13:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.979 13:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.979 [ 00:16:47.979 { 00:16:47.979 "name": "BaseBdev1", 00:16:47.979 "aliases": [ 00:16:47.979 "40442f42-9ab0-4bc4-b108-eed9156182c2" 00:16:47.979 ], 00:16:47.979 "product_name": "Malloc disk", 00:16:47.979 "block_size": 512, 00:16:47.979 "num_blocks": 65536, 00:16:47.979 "uuid": "40442f42-9ab0-4bc4-b108-eed9156182c2", 00:16:47.979 "assigned_rate_limits": { 00:16:47.979 "rw_ios_per_sec": 0, 00:16:47.979 "rw_mbytes_per_sec": 0, 00:16:47.979 "r_mbytes_per_sec": 0, 00:16:47.979 "w_mbytes_per_sec": 0 00:16:47.979 }, 00:16:47.979 "claimed": true, 00:16:47.979 "claim_type": "exclusive_write", 00:16:47.979 "zoned": false, 00:16:47.979 "supported_io_types": { 00:16:47.979 "read": true, 00:16:47.979 "write": true, 00:16:47.979 "unmap": true, 00:16:47.979 "flush": true, 00:16:47.979 "reset": true, 00:16:47.979 "nvme_admin": false, 00:16:47.979 "nvme_io": false, 00:16:47.979 "nvme_io_md": false, 00:16:47.979 "write_zeroes": true, 00:16:47.979 "zcopy": true, 00:16:47.979 "get_zone_info": false, 00:16:47.979 "zone_management": false, 00:16:47.979 "zone_append": false, 00:16:47.979 "compare": false, 00:16:47.979 "compare_and_write": false, 00:16:47.979 "abort": true, 00:16:47.979 "seek_hole": false, 00:16:47.979 "seek_data": false, 00:16:47.979 "copy": true, 00:16:47.979 "nvme_iov_md": false 00:16:47.979 }, 00:16:47.979 "memory_domains": [ 00:16:47.979 { 00:16:47.979 "dma_device_id": "system", 00:16:47.979 "dma_device_type": 1 00:16:47.979 }, 00:16:47.979 { 00:16:47.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.979 "dma_device_type": 2 00:16:47.979 } 00:16:47.979 ], 00:16:47.979 "driver_specific": {} 00:16:47.979 } 00:16:47.979 ] 00:16:47.979 13:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:47.979 13:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:47.979 13:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:47.979 13:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:47.979 13:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:47.979 13:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:47.979 13:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.979 13:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:47.979 13:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.979 13:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.979 13:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.979 13:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.979 13:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.979 13:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.979 13:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.979 13:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.979 13:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.979 13:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.979 "name": "Existed_Raid", 
00:16:47.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.979 "strip_size_kb": 64, 00:16:47.979 "state": "configuring", 00:16:47.979 "raid_level": "concat", 00:16:47.979 "superblock": false, 00:16:47.979 "num_base_bdevs": 4, 00:16:47.979 "num_base_bdevs_discovered": 1, 00:16:47.979 "num_base_bdevs_operational": 4, 00:16:47.979 "base_bdevs_list": [ 00:16:47.979 { 00:16:47.979 "name": "BaseBdev1", 00:16:47.979 "uuid": "40442f42-9ab0-4bc4-b108-eed9156182c2", 00:16:47.979 "is_configured": true, 00:16:47.979 "data_offset": 0, 00:16:47.979 "data_size": 65536 00:16:47.979 }, 00:16:47.979 { 00:16:47.979 "name": "BaseBdev2", 00:16:47.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.979 "is_configured": false, 00:16:47.979 "data_offset": 0, 00:16:47.979 "data_size": 0 00:16:47.979 }, 00:16:47.979 { 00:16:47.979 "name": "BaseBdev3", 00:16:47.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.979 "is_configured": false, 00:16:47.979 "data_offset": 0, 00:16:47.979 "data_size": 0 00:16:47.979 }, 00:16:47.979 { 00:16:47.979 "name": "BaseBdev4", 00:16:47.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.979 "is_configured": false, 00:16:47.979 "data_offset": 0, 00:16:47.979 "data_size": 0 00:16:47.979 } 00:16:47.979 ] 00:16:47.979 }' 00:16:47.979 13:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.979 13:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.546 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:48.547 13:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.547 13:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.547 [2024-12-06 13:11:35.284718] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:48.547 [2024-12-06 13:11:35.284813] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:48.547 13:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.547 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:48.547 13:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.547 13:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.547 [2024-12-06 13:11:35.292758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:48.547 [2024-12-06 13:11:35.295618] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:48.547 [2024-12-06 13:11:35.295798] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:48.547 [2024-12-06 13:11:35.295935] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:48.547 [2024-12-06 13:11:35.296000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:48.547 [2024-12-06 13:11:35.296250] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:48.547 [2024-12-06 13:11:35.296321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:48.547 13:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.547 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:48.547 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:48.547 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:16:48.547 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:48.547 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:48.547 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:48.547 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.547 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:48.547 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.547 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.547 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.547 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.547 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.547 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.547 13:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.547 13:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.547 13:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.547 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.547 "name": "Existed_Raid", 00:16:48.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.547 "strip_size_kb": 64, 00:16:48.547 "state": "configuring", 00:16:48.547 "raid_level": "concat", 00:16:48.547 "superblock": false, 00:16:48.547 "num_base_bdevs": 4, 00:16:48.547 
"num_base_bdevs_discovered": 1, 00:16:48.547 "num_base_bdevs_operational": 4, 00:16:48.547 "base_bdevs_list": [ 00:16:48.547 { 00:16:48.547 "name": "BaseBdev1", 00:16:48.547 "uuid": "40442f42-9ab0-4bc4-b108-eed9156182c2", 00:16:48.547 "is_configured": true, 00:16:48.547 "data_offset": 0, 00:16:48.547 "data_size": 65536 00:16:48.547 }, 00:16:48.547 { 00:16:48.547 "name": "BaseBdev2", 00:16:48.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.547 "is_configured": false, 00:16:48.547 "data_offset": 0, 00:16:48.547 "data_size": 0 00:16:48.547 }, 00:16:48.547 { 00:16:48.547 "name": "BaseBdev3", 00:16:48.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.547 "is_configured": false, 00:16:48.547 "data_offset": 0, 00:16:48.547 "data_size": 0 00:16:48.547 }, 00:16:48.547 { 00:16:48.547 "name": "BaseBdev4", 00:16:48.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.547 "is_configured": false, 00:16:48.547 "data_offset": 0, 00:16:48.547 "data_size": 0 00:16:48.547 } 00:16:48.547 ] 00:16:48.547 }' 00:16:48.547 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.547 13:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.115 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:49.115 13:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.115 13:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.115 [2024-12-06 13:11:35.867252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:49.115 BaseBdev2 00:16:49.115 13:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.115 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:49.115 13:11:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:49.115 13:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:49.115 13:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:49.115 13:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:49.115 13:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:49.115 13:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:49.115 13:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.115 13:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.115 13:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.115 13:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:49.115 13:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.115 13:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.115 [ 00:16:49.115 { 00:16:49.115 "name": "BaseBdev2", 00:16:49.115 "aliases": [ 00:16:49.115 "dbabe703-c03e-46e6-b114-5b3a955fe011" 00:16:49.115 ], 00:16:49.115 "product_name": "Malloc disk", 00:16:49.115 "block_size": 512, 00:16:49.115 "num_blocks": 65536, 00:16:49.115 "uuid": "dbabe703-c03e-46e6-b114-5b3a955fe011", 00:16:49.115 "assigned_rate_limits": { 00:16:49.115 "rw_ios_per_sec": 0, 00:16:49.115 "rw_mbytes_per_sec": 0, 00:16:49.115 "r_mbytes_per_sec": 0, 00:16:49.115 "w_mbytes_per_sec": 0 00:16:49.115 }, 00:16:49.115 "claimed": true, 00:16:49.115 "claim_type": "exclusive_write", 00:16:49.115 "zoned": false, 00:16:49.115 "supported_io_types": { 
00:16:49.115 "read": true, 00:16:49.115 "write": true, 00:16:49.115 "unmap": true, 00:16:49.115 "flush": true, 00:16:49.115 "reset": true, 00:16:49.115 "nvme_admin": false, 00:16:49.115 "nvme_io": false, 00:16:49.115 "nvme_io_md": false, 00:16:49.115 "write_zeroes": true, 00:16:49.115 "zcopy": true, 00:16:49.115 "get_zone_info": false, 00:16:49.115 "zone_management": false, 00:16:49.115 "zone_append": false, 00:16:49.115 "compare": false, 00:16:49.115 "compare_and_write": false, 00:16:49.115 "abort": true, 00:16:49.115 "seek_hole": false, 00:16:49.115 "seek_data": false, 00:16:49.115 "copy": true, 00:16:49.115 "nvme_iov_md": false 00:16:49.115 }, 00:16:49.115 "memory_domains": [ 00:16:49.115 { 00:16:49.115 "dma_device_id": "system", 00:16:49.115 "dma_device_type": 1 00:16:49.115 }, 00:16:49.115 { 00:16:49.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.115 "dma_device_type": 2 00:16:49.115 } 00:16:49.115 ], 00:16:49.115 "driver_specific": {} 00:16:49.115 } 00:16:49.115 ] 00:16:49.115 13:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.115 13:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:49.115 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:49.115 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:49.115 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:49.115 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:49.115 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:49.115 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:49.115 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:16:49.115 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:49.115 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.115 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.115 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.115 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.115 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.115 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.115 13:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.115 13:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.115 13:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.115 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.115 "name": "Existed_Raid", 00:16:49.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.115 "strip_size_kb": 64, 00:16:49.115 "state": "configuring", 00:16:49.116 "raid_level": "concat", 00:16:49.116 "superblock": false, 00:16:49.116 "num_base_bdevs": 4, 00:16:49.116 "num_base_bdevs_discovered": 2, 00:16:49.116 "num_base_bdevs_operational": 4, 00:16:49.116 "base_bdevs_list": [ 00:16:49.116 { 00:16:49.116 "name": "BaseBdev1", 00:16:49.116 "uuid": "40442f42-9ab0-4bc4-b108-eed9156182c2", 00:16:49.116 "is_configured": true, 00:16:49.116 "data_offset": 0, 00:16:49.116 "data_size": 65536 00:16:49.116 }, 00:16:49.116 { 00:16:49.116 "name": "BaseBdev2", 00:16:49.116 "uuid": "dbabe703-c03e-46e6-b114-5b3a955fe011", 00:16:49.116 
"is_configured": true, 00:16:49.116 "data_offset": 0, 00:16:49.116 "data_size": 65536 00:16:49.116 }, 00:16:49.116 { 00:16:49.116 "name": "BaseBdev3", 00:16:49.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.116 "is_configured": false, 00:16:49.116 "data_offset": 0, 00:16:49.116 "data_size": 0 00:16:49.116 }, 00:16:49.116 { 00:16:49.116 "name": "BaseBdev4", 00:16:49.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.116 "is_configured": false, 00:16:49.116 "data_offset": 0, 00:16:49.116 "data_size": 0 00:16:49.116 } 00:16:49.116 ] 00:16:49.116 }' 00:16:49.116 13:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.116 13:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.684 13:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:49.684 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.684 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.684 [2024-12-06 13:11:36.464243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:49.684 BaseBdev3 00:16:49.684 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.684 13:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:49.684 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:49.684 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:49.684 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:49.684 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:49.684 13:11:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:49.684 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:49.684 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.684 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.685 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.685 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:49.685 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.685 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.685 [ 00:16:49.685 { 00:16:49.685 "name": "BaseBdev3", 00:16:49.685 "aliases": [ 00:16:49.685 "e0836fc9-4303-4eb7-8ab9-1bb7c7fff9ed" 00:16:49.685 ], 00:16:49.685 "product_name": "Malloc disk", 00:16:49.685 "block_size": 512, 00:16:49.685 "num_blocks": 65536, 00:16:49.685 "uuid": "e0836fc9-4303-4eb7-8ab9-1bb7c7fff9ed", 00:16:49.685 "assigned_rate_limits": { 00:16:49.685 "rw_ios_per_sec": 0, 00:16:49.685 "rw_mbytes_per_sec": 0, 00:16:49.685 "r_mbytes_per_sec": 0, 00:16:49.685 "w_mbytes_per_sec": 0 00:16:49.685 }, 00:16:49.685 "claimed": true, 00:16:49.685 "claim_type": "exclusive_write", 00:16:49.685 "zoned": false, 00:16:49.685 "supported_io_types": { 00:16:49.685 "read": true, 00:16:49.685 "write": true, 00:16:49.685 "unmap": true, 00:16:49.685 "flush": true, 00:16:49.685 "reset": true, 00:16:49.685 "nvme_admin": false, 00:16:49.685 "nvme_io": false, 00:16:49.685 "nvme_io_md": false, 00:16:49.685 "write_zeroes": true, 00:16:49.685 "zcopy": true, 00:16:49.685 "get_zone_info": false, 00:16:49.685 "zone_management": false, 00:16:49.685 "zone_append": false, 00:16:49.685 "compare": false, 00:16:49.685 "compare_and_write": false, 
00:16:49.685 "abort": true, 00:16:49.685 "seek_hole": false, 00:16:49.685 "seek_data": false, 00:16:49.685 "copy": true, 00:16:49.685 "nvme_iov_md": false 00:16:49.685 }, 00:16:49.685 "memory_domains": [ 00:16:49.685 { 00:16:49.685 "dma_device_id": "system", 00:16:49.685 "dma_device_type": 1 00:16:49.685 }, 00:16:49.685 { 00:16:49.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.685 "dma_device_type": 2 00:16:49.685 } 00:16:49.685 ], 00:16:49.685 "driver_specific": {} 00:16:49.685 } 00:16:49.685 ] 00:16:49.685 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.685 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:49.685 13:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:49.685 13:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:49.685 13:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:49.685 13:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:49.685 13:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:49.685 13:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:49.685 13:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.685 13:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:49.685 13:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.685 13:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.685 13:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:49.685 13:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.685 13:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.685 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.685 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.685 13:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.685 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.685 13:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.685 "name": "Existed_Raid", 00:16:49.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.685 "strip_size_kb": 64, 00:16:49.685 "state": "configuring", 00:16:49.685 "raid_level": "concat", 00:16:49.685 "superblock": false, 00:16:49.685 "num_base_bdevs": 4, 00:16:49.685 "num_base_bdevs_discovered": 3, 00:16:49.685 "num_base_bdevs_operational": 4, 00:16:49.685 "base_bdevs_list": [ 00:16:49.685 { 00:16:49.685 "name": "BaseBdev1", 00:16:49.685 "uuid": "40442f42-9ab0-4bc4-b108-eed9156182c2", 00:16:49.685 "is_configured": true, 00:16:49.685 "data_offset": 0, 00:16:49.685 "data_size": 65536 00:16:49.685 }, 00:16:49.685 { 00:16:49.685 "name": "BaseBdev2", 00:16:49.685 "uuid": "dbabe703-c03e-46e6-b114-5b3a955fe011", 00:16:49.685 "is_configured": true, 00:16:49.685 "data_offset": 0, 00:16:49.685 "data_size": 65536 00:16:49.685 }, 00:16:49.685 { 00:16:49.685 "name": "BaseBdev3", 00:16:49.685 "uuid": "e0836fc9-4303-4eb7-8ab9-1bb7c7fff9ed", 00:16:49.685 "is_configured": true, 00:16:49.685 "data_offset": 0, 00:16:49.685 "data_size": 65536 00:16:49.685 }, 00:16:49.685 { 00:16:49.685 "name": "BaseBdev4", 00:16:49.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.685 "is_configured": false, 
00:16:49.685 "data_offset": 0, 00:16:49.685 "data_size": 0 00:16:49.685 } 00:16:49.685 ] 00:16:49.685 }' 00:16:49.685 13:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.685 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.254 13:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:50.254 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.254 13:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.254 [2024-12-06 13:11:37.042836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:50.254 [2024-12-06 13:11:37.042921] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:50.254 [2024-12-06 13:11:37.042936] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:16:50.254 [2024-12-06 13:11:37.043326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:50.254 [2024-12-06 13:11:37.043590] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:50.254 [2024-12-06 13:11:37.043613] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:50.254 [2024-12-06 13:11:37.043968] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.254 BaseBdev4 00:16:50.254 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.254 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:50.254 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:50.254 13:11:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:50.254 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:50.254 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:50.254 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:50.254 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:50.254 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.254 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.254 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.254 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:50.254 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.254 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.254 [ 00:16:50.254 { 00:16:50.254 "name": "BaseBdev4", 00:16:50.254 "aliases": [ 00:16:50.254 "000b92da-c0b8-468c-b0f0-22c9ed17f42f" 00:16:50.254 ], 00:16:50.254 "product_name": "Malloc disk", 00:16:50.254 "block_size": 512, 00:16:50.254 "num_blocks": 65536, 00:16:50.254 "uuid": "000b92da-c0b8-468c-b0f0-22c9ed17f42f", 00:16:50.254 "assigned_rate_limits": { 00:16:50.254 "rw_ios_per_sec": 0, 00:16:50.254 "rw_mbytes_per_sec": 0, 00:16:50.254 "r_mbytes_per_sec": 0, 00:16:50.254 "w_mbytes_per_sec": 0 00:16:50.254 }, 00:16:50.254 "claimed": true, 00:16:50.254 "claim_type": "exclusive_write", 00:16:50.254 "zoned": false, 00:16:50.254 "supported_io_types": { 00:16:50.254 "read": true, 00:16:50.254 "write": true, 00:16:50.254 "unmap": true, 00:16:50.254 "flush": true, 00:16:50.254 "reset": true, 00:16:50.254 
"nvme_admin": false, 00:16:50.254 "nvme_io": false, 00:16:50.254 "nvme_io_md": false, 00:16:50.254 "write_zeroes": true, 00:16:50.254 "zcopy": true, 00:16:50.254 "get_zone_info": false, 00:16:50.254 "zone_management": false, 00:16:50.254 "zone_append": false, 00:16:50.254 "compare": false, 00:16:50.254 "compare_and_write": false, 00:16:50.254 "abort": true, 00:16:50.254 "seek_hole": false, 00:16:50.254 "seek_data": false, 00:16:50.254 "copy": true, 00:16:50.254 "nvme_iov_md": false 00:16:50.254 }, 00:16:50.254 "memory_domains": [ 00:16:50.254 { 00:16:50.254 "dma_device_id": "system", 00:16:50.254 "dma_device_type": 1 00:16:50.254 }, 00:16:50.254 { 00:16:50.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.254 "dma_device_type": 2 00:16:50.254 } 00:16:50.254 ], 00:16:50.254 "driver_specific": {} 00:16:50.254 } 00:16:50.254 ] 00:16:50.254 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.254 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:50.254 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:50.254 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:50.254 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:16:50.254 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:50.254 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.254 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:50.254 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.254 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:50.254 
13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.254 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.254 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.254 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.254 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.254 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.254 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.254 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.254 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.254 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.254 "name": "Existed_Raid", 00:16:50.254 "uuid": "ab10425e-cdae-4489-a230-2cf8366e00a5", 00:16:50.254 "strip_size_kb": 64, 00:16:50.254 "state": "online", 00:16:50.254 "raid_level": "concat", 00:16:50.254 "superblock": false, 00:16:50.254 "num_base_bdevs": 4, 00:16:50.254 "num_base_bdevs_discovered": 4, 00:16:50.254 "num_base_bdevs_operational": 4, 00:16:50.254 "base_bdevs_list": [ 00:16:50.254 { 00:16:50.254 "name": "BaseBdev1", 00:16:50.254 "uuid": "40442f42-9ab0-4bc4-b108-eed9156182c2", 00:16:50.254 "is_configured": true, 00:16:50.254 "data_offset": 0, 00:16:50.254 "data_size": 65536 00:16:50.254 }, 00:16:50.254 { 00:16:50.254 "name": "BaseBdev2", 00:16:50.254 "uuid": "dbabe703-c03e-46e6-b114-5b3a955fe011", 00:16:50.254 "is_configured": true, 00:16:50.254 "data_offset": 0, 00:16:50.254 "data_size": 65536 00:16:50.254 }, 00:16:50.254 { 00:16:50.254 "name": "BaseBdev3", 
00:16:50.254 "uuid": "e0836fc9-4303-4eb7-8ab9-1bb7c7fff9ed", 00:16:50.254 "is_configured": true, 00:16:50.254 "data_offset": 0, 00:16:50.254 "data_size": 65536 00:16:50.254 }, 00:16:50.254 { 00:16:50.254 "name": "BaseBdev4", 00:16:50.254 "uuid": "000b92da-c0b8-468c-b0f0-22c9ed17f42f", 00:16:50.254 "is_configured": true, 00:16:50.254 "data_offset": 0, 00:16:50.254 "data_size": 65536 00:16:50.254 } 00:16:50.254 ] 00:16:50.254 }' 00:16:50.254 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.254 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.822 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:50.822 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:50.822 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:50.822 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:50.822 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:50.822 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:50.822 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:50.822 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.822 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.822 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:50.822 [2024-12-06 13:11:37.627574] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:50.822 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.822 
13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:50.822 "name": "Existed_Raid", 00:16:50.822 "aliases": [ 00:16:50.822 "ab10425e-cdae-4489-a230-2cf8366e00a5" 00:16:50.822 ], 00:16:50.822 "product_name": "Raid Volume", 00:16:50.822 "block_size": 512, 00:16:50.822 "num_blocks": 262144, 00:16:50.822 "uuid": "ab10425e-cdae-4489-a230-2cf8366e00a5", 00:16:50.822 "assigned_rate_limits": { 00:16:50.822 "rw_ios_per_sec": 0, 00:16:50.822 "rw_mbytes_per_sec": 0, 00:16:50.822 "r_mbytes_per_sec": 0, 00:16:50.822 "w_mbytes_per_sec": 0 00:16:50.822 }, 00:16:50.822 "claimed": false, 00:16:50.822 "zoned": false, 00:16:50.822 "supported_io_types": { 00:16:50.822 "read": true, 00:16:50.822 "write": true, 00:16:50.822 "unmap": true, 00:16:50.822 "flush": true, 00:16:50.822 "reset": true, 00:16:50.822 "nvme_admin": false, 00:16:50.822 "nvme_io": false, 00:16:50.822 "nvme_io_md": false, 00:16:50.822 "write_zeroes": true, 00:16:50.822 "zcopy": false, 00:16:50.822 "get_zone_info": false, 00:16:50.822 "zone_management": false, 00:16:50.822 "zone_append": false, 00:16:50.822 "compare": false, 00:16:50.822 "compare_and_write": false, 00:16:50.822 "abort": false, 00:16:50.822 "seek_hole": false, 00:16:50.822 "seek_data": false, 00:16:50.822 "copy": false, 00:16:50.822 "nvme_iov_md": false 00:16:50.822 }, 00:16:50.822 "memory_domains": [ 00:16:50.822 { 00:16:50.822 "dma_device_id": "system", 00:16:50.822 "dma_device_type": 1 00:16:50.822 }, 00:16:50.822 { 00:16:50.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.822 "dma_device_type": 2 00:16:50.822 }, 00:16:50.822 { 00:16:50.822 "dma_device_id": "system", 00:16:50.822 "dma_device_type": 1 00:16:50.822 }, 00:16:50.822 { 00:16:50.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.823 "dma_device_type": 2 00:16:50.823 }, 00:16:50.823 { 00:16:50.823 "dma_device_id": "system", 00:16:50.823 "dma_device_type": 1 00:16:50.823 }, 00:16:50.823 { 00:16:50.823 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:50.823 "dma_device_type": 2 00:16:50.823 }, 00:16:50.823 { 00:16:50.823 "dma_device_id": "system", 00:16:50.823 "dma_device_type": 1 00:16:50.823 }, 00:16:50.823 { 00:16:50.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.823 "dma_device_type": 2 00:16:50.823 } 00:16:50.823 ], 00:16:50.823 "driver_specific": { 00:16:50.823 "raid": { 00:16:50.823 "uuid": "ab10425e-cdae-4489-a230-2cf8366e00a5", 00:16:50.823 "strip_size_kb": 64, 00:16:50.823 "state": "online", 00:16:50.823 "raid_level": "concat", 00:16:50.823 "superblock": false, 00:16:50.823 "num_base_bdevs": 4, 00:16:50.823 "num_base_bdevs_discovered": 4, 00:16:50.823 "num_base_bdevs_operational": 4, 00:16:50.823 "base_bdevs_list": [ 00:16:50.823 { 00:16:50.823 "name": "BaseBdev1", 00:16:50.823 "uuid": "40442f42-9ab0-4bc4-b108-eed9156182c2", 00:16:50.823 "is_configured": true, 00:16:50.823 "data_offset": 0, 00:16:50.823 "data_size": 65536 00:16:50.823 }, 00:16:50.823 { 00:16:50.823 "name": "BaseBdev2", 00:16:50.823 "uuid": "dbabe703-c03e-46e6-b114-5b3a955fe011", 00:16:50.823 "is_configured": true, 00:16:50.823 "data_offset": 0, 00:16:50.823 "data_size": 65536 00:16:50.823 }, 00:16:50.823 { 00:16:50.823 "name": "BaseBdev3", 00:16:50.823 "uuid": "e0836fc9-4303-4eb7-8ab9-1bb7c7fff9ed", 00:16:50.823 "is_configured": true, 00:16:50.823 "data_offset": 0, 00:16:50.823 "data_size": 65536 00:16:50.823 }, 00:16:50.823 { 00:16:50.823 "name": "BaseBdev4", 00:16:50.823 "uuid": "000b92da-c0b8-468c-b0f0-22c9ed17f42f", 00:16:50.823 "is_configured": true, 00:16:50.823 "data_offset": 0, 00:16:50.823 "data_size": 65536 00:16:50.823 } 00:16:50.823 ] 00:16:50.823 } 00:16:50.823 } 00:16:50.823 }' 00:16:50.823 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:50.823 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:50.823 BaseBdev2 
00:16:50.823 BaseBdev3 00:16:50.823 BaseBdev4' 00:16:50.823 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.823 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:50.823 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.823 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:50.823 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.823 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.823 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.823 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.823 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:50.823 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:50.823 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.823 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:50.823 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.823 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.823 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.823 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.082 13:11:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:51.082 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:51.082 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:51.082 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:51.082 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.082 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.082 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:51.082 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.082 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:51.082 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:51.082 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:51.082 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:51.082 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:51.082 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.082 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.082 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.082 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:51.082 13:11:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:51.082 13:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:51.082 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.082 13:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.082 [2024-12-06 13:11:37.971222] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:51.082 [2024-12-06 13:11:37.971271] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:51.082 [2024-12-06 13:11:37.971347] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:51.082 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.082 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:51.082 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:16:51.082 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:51.082 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:51.082 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:16:51.082 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:16:51.083 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:51.083 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:16:51.083 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:51.083 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:16:51.083 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:51.083 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.083 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.083 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.083 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.083 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.083 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.083 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.083 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.083 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.341 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.341 "name": "Existed_Raid", 00:16:51.341 "uuid": "ab10425e-cdae-4489-a230-2cf8366e00a5", 00:16:51.341 "strip_size_kb": 64, 00:16:51.341 "state": "offline", 00:16:51.341 "raid_level": "concat", 00:16:51.341 "superblock": false, 00:16:51.341 "num_base_bdevs": 4, 00:16:51.341 "num_base_bdevs_discovered": 3, 00:16:51.341 "num_base_bdevs_operational": 3, 00:16:51.341 "base_bdevs_list": [ 00:16:51.341 { 00:16:51.341 "name": null, 00:16:51.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.341 "is_configured": false, 00:16:51.341 "data_offset": 0, 00:16:51.341 "data_size": 65536 00:16:51.341 }, 00:16:51.341 { 00:16:51.341 "name": "BaseBdev2", 00:16:51.341 "uuid": "dbabe703-c03e-46e6-b114-5b3a955fe011", 00:16:51.341 "is_configured": 
true, 00:16:51.341 "data_offset": 0, 00:16:51.341 "data_size": 65536 00:16:51.341 }, 00:16:51.341 { 00:16:51.341 "name": "BaseBdev3", 00:16:51.341 "uuid": "e0836fc9-4303-4eb7-8ab9-1bb7c7fff9ed", 00:16:51.341 "is_configured": true, 00:16:51.341 "data_offset": 0, 00:16:51.341 "data_size": 65536 00:16:51.341 }, 00:16:51.341 { 00:16:51.341 "name": "BaseBdev4", 00:16:51.341 "uuid": "000b92da-c0b8-468c-b0f0-22c9ed17f42f", 00:16:51.341 "is_configured": true, 00:16:51.341 "data_offset": 0, 00:16:51.341 "data_size": 65536 00:16:51.341 } 00:16:51.341 ] 00:16:51.341 }' 00:16:51.341 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.341 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.907 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:51.907 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:51.907 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:51.907 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.907 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.907 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.907 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.907 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:51.907 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:51.907 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:51.907 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:51.907 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.907 [2024-12-06 13:11:38.669936] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:51.907 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.907 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:51.907 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:51.907 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.907 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:51.907 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.907 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.907 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.907 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:51.907 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:51.907 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:51.907 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.907 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.907 [2024-12-06 13:11:38.822931] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:51.907 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.907 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:51.907 13:11:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:52.165 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.165 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.165 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.165 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:52.165 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.165 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:52.165 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:52.165 13:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:52.165 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.165 13:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.165 [2024-12-06 13:11:38.976623] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:52.165 [2024-12-06 13:11:38.976724] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:52.165 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.165 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:52.165 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:52.165 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.165 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:16:52.165 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.165 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.165 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.165 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:52.165 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:52.165 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:52.165 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:52.165 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:52.165 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:52.165 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.165 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.165 BaseBdev2 00:16:52.165 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.165 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:52.165 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:52.165 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:52.165 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:52.165 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:52.165 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:16:52.165 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:52.165 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.165 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.165 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.165 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:52.165 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.165 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.423 [ 00:16:52.423 { 00:16:52.423 "name": "BaseBdev2", 00:16:52.423 "aliases": [ 00:16:52.423 "24add9ee-aa9b-44e5-9e46-68b5734c1f58" 00:16:52.423 ], 00:16:52.423 "product_name": "Malloc disk", 00:16:52.423 "block_size": 512, 00:16:52.423 "num_blocks": 65536, 00:16:52.423 "uuid": "24add9ee-aa9b-44e5-9e46-68b5734c1f58", 00:16:52.423 "assigned_rate_limits": { 00:16:52.423 "rw_ios_per_sec": 0, 00:16:52.423 "rw_mbytes_per_sec": 0, 00:16:52.423 "r_mbytes_per_sec": 0, 00:16:52.423 "w_mbytes_per_sec": 0 00:16:52.423 }, 00:16:52.423 "claimed": false, 00:16:52.423 "zoned": false, 00:16:52.423 "supported_io_types": { 00:16:52.423 "read": true, 00:16:52.423 "write": true, 00:16:52.423 "unmap": true, 00:16:52.423 "flush": true, 00:16:52.423 "reset": true, 00:16:52.423 "nvme_admin": false, 00:16:52.423 "nvme_io": false, 00:16:52.423 "nvme_io_md": false, 00:16:52.423 "write_zeroes": true, 00:16:52.423 "zcopy": true, 00:16:52.423 "get_zone_info": false, 00:16:52.423 "zone_management": false, 00:16:52.423 "zone_append": false, 00:16:52.423 "compare": false, 00:16:52.423 "compare_and_write": false, 00:16:52.423 "abort": true, 00:16:52.423 "seek_hole": false, 00:16:52.423 
"seek_data": false, 00:16:52.423 "copy": true, 00:16:52.423 "nvme_iov_md": false 00:16:52.423 }, 00:16:52.423 "memory_domains": [ 00:16:52.424 { 00:16:52.424 "dma_device_id": "system", 00:16:52.424 "dma_device_type": 1 00:16:52.424 }, 00:16:52.424 { 00:16:52.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.424 "dma_device_type": 2 00:16:52.424 } 00:16:52.424 ], 00:16:52.424 "driver_specific": {} 00:16:52.424 } 00:16:52.424 ] 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.424 BaseBdev3 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.424 [ 00:16:52.424 { 00:16:52.424 "name": "BaseBdev3", 00:16:52.424 "aliases": [ 00:16:52.424 "d281ad12-63ab-483e-b0b0-7df98b6db80f" 00:16:52.424 ], 00:16:52.424 "product_name": "Malloc disk", 00:16:52.424 "block_size": 512, 00:16:52.424 "num_blocks": 65536, 00:16:52.424 "uuid": "d281ad12-63ab-483e-b0b0-7df98b6db80f", 00:16:52.424 "assigned_rate_limits": { 00:16:52.424 "rw_ios_per_sec": 0, 00:16:52.424 "rw_mbytes_per_sec": 0, 00:16:52.424 "r_mbytes_per_sec": 0, 00:16:52.424 "w_mbytes_per_sec": 0 00:16:52.424 }, 00:16:52.424 "claimed": false, 00:16:52.424 "zoned": false, 00:16:52.424 "supported_io_types": { 00:16:52.424 "read": true, 00:16:52.424 "write": true, 00:16:52.424 "unmap": true, 00:16:52.424 "flush": true, 00:16:52.424 "reset": true, 00:16:52.424 "nvme_admin": false, 00:16:52.424 "nvme_io": false, 00:16:52.424 "nvme_io_md": false, 00:16:52.424 "write_zeroes": true, 00:16:52.424 "zcopy": true, 00:16:52.424 "get_zone_info": false, 00:16:52.424 "zone_management": false, 00:16:52.424 "zone_append": false, 00:16:52.424 "compare": false, 00:16:52.424 "compare_and_write": false, 00:16:52.424 "abort": true, 00:16:52.424 "seek_hole": false, 00:16:52.424 "seek_data": false, 
00:16:52.424 "copy": true, 00:16:52.424 "nvme_iov_md": false 00:16:52.424 }, 00:16:52.424 "memory_domains": [ 00:16:52.424 { 00:16:52.424 "dma_device_id": "system", 00:16:52.424 "dma_device_type": 1 00:16:52.424 }, 00:16:52.424 { 00:16:52.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.424 "dma_device_type": 2 00:16:52.424 } 00:16:52.424 ], 00:16:52.424 "driver_specific": {} 00:16:52.424 } 00:16:52.424 ] 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.424 BaseBdev4 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:52.424 
13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.424 [ 00:16:52.424 { 00:16:52.424 "name": "BaseBdev4", 00:16:52.424 "aliases": [ 00:16:52.424 "0304b2d4-171f-415a-afa4-7f061ad1ff11" 00:16:52.424 ], 00:16:52.424 "product_name": "Malloc disk", 00:16:52.424 "block_size": 512, 00:16:52.424 "num_blocks": 65536, 00:16:52.424 "uuid": "0304b2d4-171f-415a-afa4-7f061ad1ff11", 00:16:52.424 "assigned_rate_limits": { 00:16:52.424 "rw_ios_per_sec": 0, 00:16:52.424 "rw_mbytes_per_sec": 0, 00:16:52.424 "r_mbytes_per_sec": 0, 00:16:52.424 "w_mbytes_per_sec": 0 00:16:52.424 }, 00:16:52.424 "claimed": false, 00:16:52.424 "zoned": false, 00:16:52.424 "supported_io_types": { 00:16:52.424 "read": true, 00:16:52.424 "write": true, 00:16:52.424 "unmap": true, 00:16:52.424 "flush": true, 00:16:52.424 "reset": true, 00:16:52.424 "nvme_admin": false, 00:16:52.424 "nvme_io": false, 00:16:52.424 "nvme_io_md": false, 00:16:52.424 "write_zeroes": true, 00:16:52.424 "zcopy": true, 00:16:52.424 "get_zone_info": false, 00:16:52.424 "zone_management": false, 00:16:52.424 "zone_append": false, 00:16:52.424 "compare": false, 00:16:52.424 "compare_and_write": false, 00:16:52.424 "abort": true, 00:16:52.424 "seek_hole": false, 00:16:52.424 "seek_data": false, 00:16:52.424 
"copy": true, 00:16:52.424 "nvme_iov_md": false 00:16:52.424 }, 00:16:52.424 "memory_domains": [ 00:16:52.424 { 00:16:52.424 "dma_device_id": "system", 00:16:52.424 "dma_device_type": 1 00:16:52.424 }, 00:16:52.424 { 00:16:52.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.424 "dma_device_type": 2 00:16:52.424 } 00:16:52.424 ], 00:16:52.424 "driver_specific": {} 00:16:52.424 } 00:16:52.424 ] 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.424 [2024-12-06 13:11:39.358554] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:52.424 [2024-12-06 13:11:39.358901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:52.424 [2024-12-06 13:11:39.359056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:52.424 [2024-12-06 13:11:39.361933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:52.424 [2024-12-06 13:11:39.362019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:52.424 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.425 13:11:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:52.425 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.425 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.425 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:52.425 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.425 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:52.425 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.425 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.425 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.425 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.425 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.425 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.425 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.425 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.425 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.425 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.425 "name": "Existed_Raid", 00:16:52.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.425 "strip_size_kb": 64, 00:16:52.425 "state": "configuring", 00:16:52.425 
"raid_level": "concat", 00:16:52.425 "superblock": false, 00:16:52.425 "num_base_bdevs": 4, 00:16:52.425 "num_base_bdevs_discovered": 3, 00:16:52.425 "num_base_bdevs_operational": 4, 00:16:52.425 "base_bdevs_list": [ 00:16:52.425 { 00:16:52.425 "name": "BaseBdev1", 00:16:52.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.425 "is_configured": false, 00:16:52.425 "data_offset": 0, 00:16:52.425 "data_size": 0 00:16:52.425 }, 00:16:52.425 { 00:16:52.425 "name": "BaseBdev2", 00:16:52.425 "uuid": "24add9ee-aa9b-44e5-9e46-68b5734c1f58", 00:16:52.425 "is_configured": true, 00:16:52.425 "data_offset": 0, 00:16:52.425 "data_size": 65536 00:16:52.425 }, 00:16:52.425 { 00:16:52.425 "name": "BaseBdev3", 00:16:52.425 "uuid": "d281ad12-63ab-483e-b0b0-7df98b6db80f", 00:16:52.425 "is_configured": true, 00:16:52.425 "data_offset": 0, 00:16:52.425 "data_size": 65536 00:16:52.425 }, 00:16:52.425 { 00:16:52.425 "name": "BaseBdev4", 00:16:52.425 "uuid": "0304b2d4-171f-415a-afa4-7f061ad1ff11", 00:16:52.425 "is_configured": true, 00:16:52.425 "data_offset": 0, 00:16:52.425 "data_size": 65536 00:16:52.425 } 00:16:52.425 ] 00:16:52.425 }' 00:16:52.425 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.425 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.989 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:52.989 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.989 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.989 [2024-12-06 13:11:39.874651] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:52.989 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.989 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:52.989 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.989 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.989 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:52.989 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.989 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:52.989 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.989 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.989 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.989 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.989 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.989 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.989 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.989 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.989 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.989 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.989 "name": "Existed_Raid", 00:16:52.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.989 "strip_size_kb": 64, 00:16:52.989 "state": "configuring", 00:16:52.989 "raid_level": "concat", 00:16:52.989 "superblock": false, 
00:16:52.989 "num_base_bdevs": 4, 00:16:52.989 "num_base_bdevs_discovered": 2, 00:16:52.989 "num_base_bdevs_operational": 4, 00:16:52.989 "base_bdevs_list": [ 00:16:52.989 { 00:16:52.989 "name": "BaseBdev1", 00:16:52.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.989 "is_configured": false, 00:16:52.989 "data_offset": 0, 00:16:52.989 "data_size": 0 00:16:52.989 }, 00:16:52.989 { 00:16:52.989 "name": null, 00:16:52.989 "uuid": "24add9ee-aa9b-44e5-9e46-68b5734c1f58", 00:16:52.989 "is_configured": false, 00:16:52.989 "data_offset": 0, 00:16:52.989 "data_size": 65536 00:16:52.989 }, 00:16:52.989 { 00:16:52.989 "name": "BaseBdev3", 00:16:52.989 "uuid": "d281ad12-63ab-483e-b0b0-7df98b6db80f", 00:16:52.989 "is_configured": true, 00:16:52.989 "data_offset": 0, 00:16:52.989 "data_size": 65536 00:16:52.989 }, 00:16:52.989 { 00:16:52.989 "name": "BaseBdev4", 00:16:52.989 "uuid": "0304b2d4-171f-415a-afa4-7f061ad1ff11", 00:16:52.989 "is_configured": true, 00:16:52.989 "data_offset": 0, 00:16:52.989 "data_size": 65536 00:16:52.989 } 00:16:52.989 ] 00:16:52.989 }' 00:16:52.989 13:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.989 13:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:53.554 13:11:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.554 [2024-12-06 13:11:40.424426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:53.554 BaseBdev1 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:53.554 [ 00:16:53.554 { 00:16:53.554 "name": "BaseBdev1", 00:16:53.554 "aliases": [ 00:16:53.554 "5f0002cc-9817-4403-9593-d2b4601ec491" 00:16:53.554 ], 00:16:53.554 "product_name": "Malloc disk", 00:16:53.554 "block_size": 512, 00:16:53.554 "num_blocks": 65536, 00:16:53.554 "uuid": "5f0002cc-9817-4403-9593-d2b4601ec491", 00:16:53.554 "assigned_rate_limits": { 00:16:53.554 "rw_ios_per_sec": 0, 00:16:53.554 "rw_mbytes_per_sec": 0, 00:16:53.554 "r_mbytes_per_sec": 0, 00:16:53.554 "w_mbytes_per_sec": 0 00:16:53.554 }, 00:16:53.554 "claimed": true, 00:16:53.554 "claim_type": "exclusive_write", 00:16:53.554 "zoned": false, 00:16:53.554 "supported_io_types": { 00:16:53.554 "read": true, 00:16:53.554 "write": true, 00:16:53.554 "unmap": true, 00:16:53.554 "flush": true, 00:16:53.554 "reset": true, 00:16:53.554 "nvme_admin": false, 00:16:53.554 "nvme_io": false, 00:16:53.554 "nvme_io_md": false, 00:16:53.554 "write_zeroes": true, 00:16:53.554 "zcopy": true, 00:16:53.554 "get_zone_info": false, 00:16:53.554 "zone_management": false, 00:16:53.554 "zone_append": false, 00:16:53.554 "compare": false, 00:16:53.554 "compare_and_write": false, 00:16:53.554 "abort": true, 00:16:53.554 "seek_hole": false, 00:16:53.554 "seek_data": false, 00:16:53.554 "copy": true, 00:16:53.554 "nvme_iov_md": false 00:16:53.554 }, 00:16:53.554 "memory_domains": [ 00:16:53.554 { 00:16:53.554 "dma_device_id": "system", 00:16:53.554 "dma_device_type": 1 00:16:53.554 }, 00:16:53.554 { 00:16:53.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.554 "dma_device_type": 2 00:16:53.554 } 00:16:53.554 ], 00:16:53.554 "driver_specific": {} 00:16:53.554 } 00:16:53.554 ] 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.554 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.554 "name": "Existed_Raid", 00:16:53.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.554 "strip_size_kb": 64, 00:16:53.554 "state": "configuring", 00:16:53.554 "raid_level": "concat", 00:16:53.554 "superblock": false, 
00:16:53.554 "num_base_bdevs": 4, 00:16:53.554 "num_base_bdevs_discovered": 3, 00:16:53.554 "num_base_bdevs_operational": 4, 00:16:53.554 "base_bdevs_list": [ 00:16:53.554 { 00:16:53.554 "name": "BaseBdev1", 00:16:53.554 "uuid": "5f0002cc-9817-4403-9593-d2b4601ec491", 00:16:53.554 "is_configured": true, 00:16:53.554 "data_offset": 0, 00:16:53.554 "data_size": 65536 00:16:53.554 }, 00:16:53.554 { 00:16:53.554 "name": null, 00:16:53.554 "uuid": "24add9ee-aa9b-44e5-9e46-68b5734c1f58", 00:16:53.555 "is_configured": false, 00:16:53.555 "data_offset": 0, 00:16:53.555 "data_size": 65536 00:16:53.555 }, 00:16:53.555 { 00:16:53.555 "name": "BaseBdev3", 00:16:53.555 "uuid": "d281ad12-63ab-483e-b0b0-7df98b6db80f", 00:16:53.555 "is_configured": true, 00:16:53.555 "data_offset": 0, 00:16:53.555 "data_size": 65536 00:16:53.555 }, 00:16:53.555 { 00:16:53.555 "name": "BaseBdev4", 00:16:53.555 "uuid": "0304b2d4-171f-415a-afa4-7f061ad1ff11", 00:16:53.555 "is_configured": true, 00:16:53.555 "data_offset": 0, 00:16:53.555 "data_size": 65536 00:16:53.555 } 00:16:53.555 ] 00:16:53.555 }' 00:16:53.555 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.555 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.184 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.184 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.184 13:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.184 13:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:54.184 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.184 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:54.184 13:11:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:54.184 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.184 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.184 [2024-12-06 13:11:41.048777] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:54.184 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.184 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:54.184 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.184 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:54.184 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:54.184 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.184 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:54.184 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.184 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.184 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.184 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.184 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.184 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.184 13:11:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.184 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.184 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.184 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.184 "name": "Existed_Raid", 00:16:54.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.184 "strip_size_kb": 64, 00:16:54.184 "state": "configuring", 00:16:54.184 "raid_level": "concat", 00:16:54.184 "superblock": false, 00:16:54.184 "num_base_bdevs": 4, 00:16:54.184 "num_base_bdevs_discovered": 2, 00:16:54.184 "num_base_bdevs_operational": 4, 00:16:54.184 "base_bdevs_list": [ 00:16:54.184 { 00:16:54.184 "name": "BaseBdev1", 00:16:54.184 "uuid": "5f0002cc-9817-4403-9593-d2b4601ec491", 00:16:54.184 "is_configured": true, 00:16:54.184 "data_offset": 0, 00:16:54.184 "data_size": 65536 00:16:54.184 }, 00:16:54.184 { 00:16:54.184 "name": null, 00:16:54.184 "uuid": "24add9ee-aa9b-44e5-9e46-68b5734c1f58", 00:16:54.184 "is_configured": false, 00:16:54.184 "data_offset": 0, 00:16:54.184 "data_size": 65536 00:16:54.184 }, 00:16:54.184 { 00:16:54.184 "name": null, 00:16:54.184 "uuid": "d281ad12-63ab-483e-b0b0-7df98b6db80f", 00:16:54.184 "is_configured": false, 00:16:54.184 "data_offset": 0, 00:16:54.184 "data_size": 65536 00:16:54.184 }, 00:16:54.184 { 00:16:54.184 "name": "BaseBdev4", 00:16:54.184 "uuid": "0304b2d4-171f-415a-afa4-7f061ad1ff11", 00:16:54.184 "is_configured": true, 00:16:54.184 "data_offset": 0, 00:16:54.184 "data_size": 65536 00:16:54.184 } 00:16:54.184 ] 00:16:54.184 }' 00:16:54.184 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.184 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.749 13:11:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.749 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.749 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.749 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:54.749 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.749 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:54.749 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:54.749 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.749 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.749 [2024-12-06 13:11:41.612874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:54.749 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.749 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:54.749 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.749 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:54.749 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:54.749 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.749 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:54.749 13:11:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.749 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.749 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.749 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.749 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.749 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.749 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.749 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.749 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.749 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.749 "name": "Existed_Raid", 00:16:54.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.749 "strip_size_kb": 64, 00:16:54.749 "state": "configuring", 00:16:54.749 "raid_level": "concat", 00:16:54.749 "superblock": false, 00:16:54.749 "num_base_bdevs": 4, 00:16:54.749 "num_base_bdevs_discovered": 3, 00:16:54.749 "num_base_bdevs_operational": 4, 00:16:54.749 "base_bdevs_list": [ 00:16:54.749 { 00:16:54.749 "name": "BaseBdev1", 00:16:54.749 "uuid": "5f0002cc-9817-4403-9593-d2b4601ec491", 00:16:54.749 "is_configured": true, 00:16:54.749 "data_offset": 0, 00:16:54.749 "data_size": 65536 00:16:54.749 }, 00:16:54.749 { 00:16:54.749 "name": null, 00:16:54.749 "uuid": "24add9ee-aa9b-44e5-9e46-68b5734c1f58", 00:16:54.749 "is_configured": false, 00:16:54.749 "data_offset": 0, 00:16:54.749 "data_size": 65536 00:16:54.749 }, 00:16:54.749 { 00:16:54.749 "name": "BaseBdev3", 00:16:54.749 "uuid": 
"d281ad12-63ab-483e-b0b0-7df98b6db80f", 00:16:54.749 "is_configured": true, 00:16:54.749 "data_offset": 0, 00:16:54.749 "data_size": 65536 00:16:54.749 }, 00:16:54.749 { 00:16:54.749 "name": "BaseBdev4", 00:16:54.749 "uuid": "0304b2d4-171f-415a-afa4-7f061ad1ff11", 00:16:54.749 "is_configured": true, 00:16:54.749 "data_offset": 0, 00:16:54.749 "data_size": 65536 00:16:54.749 } 00:16:54.749 ] 00:16:54.749 }' 00:16:54.749 13:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.749 13:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.314 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:55.314 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.314 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.314 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.314 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.314 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:55.314 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:55.314 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.314 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.314 [2024-12-06 13:11:42.161104] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:55.314 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.314 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:16:55.314 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:55.314 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:55.314 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:55.314 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.314 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:55.314 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.314 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.314 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.314 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.314 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.314 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.314 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.314 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.314 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.314 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.314 "name": "Existed_Raid", 00:16:55.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.314 "strip_size_kb": 64, 00:16:55.314 "state": "configuring", 00:16:55.314 "raid_level": "concat", 00:16:55.314 "superblock": false, 00:16:55.314 "num_base_bdevs": 4, 00:16:55.314 
"num_base_bdevs_discovered": 2, 00:16:55.314 "num_base_bdevs_operational": 4, 00:16:55.314 "base_bdevs_list": [ 00:16:55.314 { 00:16:55.314 "name": null, 00:16:55.314 "uuid": "5f0002cc-9817-4403-9593-d2b4601ec491", 00:16:55.314 "is_configured": false, 00:16:55.314 "data_offset": 0, 00:16:55.314 "data_size": 65536 00:16:55.314 }, 00:16:55.314 { 00:16:55.314 "name": null, 00:16:55.314 "uuid": "24add9ee-aa9b-44e5-9e46-68b5734c1f58", 00:16:55.314 "is_configured": false, 00:16:55.314 "data_offset": 0, 00:16:55.314 "data_size": 65536 00:16:55.314 }, 00:16:55.314 { 00:16:55.314 "name": "BaseBdev3", 00:16:55.314 "uuid": "d281ad12-63ab-483e-b0b0-7df98b6db80f", 00:16:55.314 "is_configured": true, 00:16:55.314 "data_offset": 0, 00:16:55.314 "data_size": 65536 00:16:55.314 }, 00:16:55.314 { 00:16:55.314 "name": "BaseBdev4", 00:16:55.314 "uuid": "0304b2d4-171f-415a-afa4-7f061ad1ff11", 00:16:55.314 "is_configured": true, 00:16:55.314 "data_offset": 0, 00:16:55.314 "data_size": 65536 00:16:55.314 } 00:16:55.314 ] 00:16:55.314 }' 00:16:55.314 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.314 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.891 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.891 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:55.891 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.891 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.891 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.891 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:55.891 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:55.891 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.891 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.891 [2024-12-06 13:11:42.793080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:55.891 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.891 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:55.891 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:55.891 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:55.891 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:55.891 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.891 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:55.891 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.891 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.891 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.891 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.891 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.891 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.891 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:55.891 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.891 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.891 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.891 "name": "Existed_Raid", 00:16:55.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.891 "strip_size_kb": 64, 00:16:55.891 "state": "configuring", 00:16:55.891 "raid_level": "concat", 00:16:55.891 "superblock": false, 00:16:55.891 "num_base_bdevs": 4, 00:16:55.891 "num_base_bdevs_discovered": 3, 00:16:55.891 "num_base_bdevs_operational": 4, 00:16:55.891 "base_bdevs_list": [ 00:16:55.891 { 00:16:55.891 "name": null, 00:16:55.891 "uuid": "5f0002cc-9817-4403-9593-d2b4601ec491", 00:16:55.891 "is_configured": false, 00:16:55.891 "data_offset": 0, 00:16:55.891 "data_size": 65536 00:16:55.891 }, 00:16:55.891 { 00:16:55.891 "name": "BaseBdev2", 00:16:55.891 "uuid": "24add9ee-aa9b-44e5-9e46-68b5734c1f58", 00:16:55.891 "is_configured": true, 00:16:55.891 "data_offset": 0, 00:16:55.891 "data_size": 65536 00:16:55.891 }, 00:16:55.891 { 00:16:55.891 "name": "BaseBdev3", 00:16:55.891 "uuid": "d281ad12-63ab-483e-b0b0-7df98b6db80f", 00:16:55.891 "is_configured": true, 00:16:55.891 "data_offset": 0, 00:16:55.891 "data_size": 65536 00:16:55.891 }, 00:16:55.891 { 00:16:55.891 "name": "BaseBdev4", 00:16:55.891 "uuid": "0304b2d4-171f-415a-afa4-7f061ad1ff11", 00:16:55.891 "is_configured": true, 00:16:55.891 "data_offset": 0, 00:16:55.891 "data_size": 65536 00:16:55.891 } 00:16:55.891 ] 00:16:55.891 }' 00:16:55.891 13:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.891 13:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5f0002cc-9817-4403-9593-d2b4601ec491 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.457 [2024-12-06 13:11:43.406318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:56.457 [2024-12-06 13:11:43.406407] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:56.457 [2024-12-06 13:11:43.406420] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:16:56.457 [2024-12-06 13:11:43.406842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d0000063c0 00:16:56.457 [2024-12-06 13:11:43.407066] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:56.457 [2024-12-06 13:11:43.407100] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:56.457 [2024-12-06 13:11:43.407451] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:56.457 NewBaseBdev 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:56.457 [ 00:16:56.457 { 00:16:56.457 "name": "NewBaseBdev", 00:16:56.457 "aliases": [ 00:16:56.457 "5f0002cc-9817-4403-9593-d2b4601ec491" 00:16:56.457 ], 00:16:56.457 "product_name": "Malloc disk", 00:16:56.457 "block_size": 512, 00:16:56.457 "num_blocks": 65536, 00:16:56.457 "uuid": "5f0002cc-9817-4403-9593-d2b4601ec491", 00:16:56.457 "assigned_rate_limits": { 00:16:56.457 "rw_ios_per_sec": 0, 00:16:56.457 "rw_mbytes_per_sec": 0, 00:16:56.457 "r_mbytes_per_sec": 0, 00:16:56.457 "w_mbytes_per_sec": 0 00:16:56.457 }, 00:16:56.457 "claimed": true, 00:16:56.457 "claim_type": "exclusive_write", 00:16:56.457 "zoned": false, 00:16:56.457 "supported_io_types": { 00:16:56.457 "read": true, 00:16:56.457 "write": true, 00:16:56.457 "unmap": true, 00:16:56.457 "flush": true, 00:16:56.457 "reset": true, 00:16:56.457 "nvme_admin": false, 00:16:56.457 "nvme_io": false, 00:16:56.457 "nvme_io_md": false, 00:16:56.457 "write_zeroes": true, 00:16:56.457 "zcopy": true, 00:16:56.457 "get_zone_info": false, 00:16:56.457 "zone_management": false, 00:16:56.457 "zone_append": false, 00:16:56.457 "compare": false, 00:16:56.457 "compare_and_write": false, 00:16:56.457 "abort": true, 00:16:56.457 "seek_hole": false, 00:16:56.457 "seek_data": false, 00:16:56.457 "copy": true, 00:16:56.457 "nvme_iov_md": false 00:16:56.457 }, 00:16:56.457 "memory_domains": [ 00:16:56.457 { 00:16:56.457 "dma_device_id": "system", 00:16:56.457 "dma_device_type": 1 00:16:56.457 }, 00:16:56.457 { 00:16:56.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.457 "dma_device_type": 2 00:16:56.457 } 00:16:56.457 ], 00:16:56.457 "driver_specific": {} 00:16:56.457 } 00:16:56.457 ] 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.457 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.716 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.716 "name": "Existed_Raid", 00:16:56.716 "uuid": "19f1fdf7-85be-43b2-a51f-1495286de594", 00:16:56.716 "strip_size_kb": 64, 00:16:56.716 "state": "online", 00:16:56.716 "raid_level": "concat", 00:16:56.716 "superblock": false, 00:16:56.716 
"num_base_bdevs": 4, 00:16:56.716 "num_base_bdevs_discovered": 4, 00:16:56.717 "num_base_bdevs_operational": 4, 00:16:56.717 "base_bdevs_list": [ 00:16:56.717 { 00:16:56.717 "name": "NewBaseBdev", 00:16:56.717 "uuid": "5f0002cc-9817-4403-9593-d2b4601ec491", 00:16:56.717 "is_configured": true, 00:16:56.717 "data_offset": 0, 00:16:56.717 "data_size": 65536 00:16:56.717 }, 00:16:56.717 { 00:16:56.717 "name": "BaseBdev2", 00:16:56.717 "uuid": "24add9ee-aa9b-44e5-9e46-68b5734c1f58", 00:16:56.717 "is_configured": true, 00:16:56.717 "data_offset": 0, 00:16:56.717 "data_size": 65536 00:16:56.717 }, 00:16:56.717 { 00:16:56.717 "name": "BaseBdev3", 00:16:56.717 "uuid": "d281ad12-63ab-483e-b0b0-7df98b6db80f", 00:16:56.717 "is_configured": true, 00:16:56.717 "data_offset": 0, 00:16:56.717 "data_size": 65536 00:16:56.717 }, 00:16:56.717 { 00:16:56.717 "name": "BaseBdev4", 00:16:56.717 "uuid": "0304b2d4-171f-415a-afa4-7f061ad1ff11", 00:16:56.717 "is_configured": true, 00:16:56.717 "data_offset": 0, 00:16:56.717 "data_size": 65536 00:16:56.717 } 00:16:56.717 ] 00:16:56.717 }' 00:16:56.717 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.717 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.975 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:56.975 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:56.975 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:56.975 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:56.975 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:56.975 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:56.975 13:11:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:56.975 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:56.975 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.975 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.975 [2024-12-06 13:11:43.931080] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:56.975 13:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.975 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:56.975 "name": "Existed_Raid", 00:16:56.975 "aliases": [ 00:16:56.975 "19f1fdf7-85be-43b2-a51f-1495286de594" 00:16:56.975 ], 00:16:56.975 "product_name": "Raid Volume", 00:16:56.975 "block_size": 512, 00:16:56.975 "num_blocks": 262144, 00:16:56.975 "uuid": "19f1fdf7-85be-43b2-a51f-1495286de594", 00:16:56.975 "assigned_rate_limits": { 00:16:56.975 "rw_ios_per_sec": 0, 00:16:56.975 "rw_mbytes_per_sec": 0, 00:16:56.975 "r_mbytes_per_sec": 0, 00:16:56.975 "w_mbytes_per_sec": 0 00:16:56.975 }, 00:16:56.975 "claimed": false, 00:16:56.975 "zoned": false, 00:16:56.975 "supported_io_types": { 00:16:56.975 "read": true, 00:16:56.975 "write": true, 00:16:56.975 "unmap": true, 00:16:56.975 "flush": true, 00:16:56.975 "reset": true, 00:16:56.975 "nvme_admin": false, 00:16:56.975 "nvme_io": false, 00:16:56.975 "nvme_io_md": false, 00:16:56.975 "write_zeroes": true, 00:16:56.975 "zcopy": false, 00:16:56.975 "get_zone_info": false, 00:16:56.975 "zone_management": false, 00:16:56.975 "zone_append": false, 00:16:56.975 "compare": false, 00:16:56.975 "compare_and_write": false, 00:16:56.975 "abort": false, 00:16:56.975 "seek_hole": false, 00:16:56.975 "seek_data": false, 00:16:56.975 "copy": false, 00:16:56.975 "nvme_iov_md": false 00:16:56.975 }, 
00:16:56.975 "memory_domains": [ 00:16:56.975 { 00:16:56.975 "dma_device_id": "system", 00:16:56.975 "dma_device_type": 1 00:16:56.975 }, 00:16:56.975 { 00:16:56.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.975 "dma_device_type": 2 00:16:56.975 }, 00:16:56.975 { 00:16:56.975 "dma_device_id": "system", 00:16:56.975 "dma_device_type": 1 00:16:56.975 }, 00:16:56.975 { 00:16:56.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.976 "dma_device_type": 2 00:16:56.976 }, 00:16:56.976 { 00:16:56.976 "dma_device_id": "system", 00:16:56.976 "dma_device_type": 1 00:16:56.976 }, 00:16:56.976 { 00:16:56.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.976 "dma_device_type": 2 00:16:56.976 }, 00:16:56.976 { 00:16:56.976 "dma_device_id": "system", 00:16:56.976 "dma_device_type": 1 00:16:56.976 }, 00:16:56.976 { 00:16:56.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.976 "dma_device_type": 2 00:16:56.976 } 00:16:56.976 ], 00:16:56.976 "driver_specific": { 00:16:56.976 "raid": { 00:16:56.976 "uuid": "19f1fdf7-85be-43b2-a51f-1495286de594", 00:16:56.976 "strip_size_kb": 64, 00:16:56.976 "state": "online", 00:16:56.976 "raid_level": "concat", 00:16:56.976 "superblock": false, 00:16:56.976 "num_base_bdevs": 4, 00:16:56.976 "num_base_bdevs_discovered": 4, 00:16:56.976 "num_base_bdevs_operational": 4, 00:16:56.976 "base_bdevs_list": [ 00:16:56.976 { 00:16:56.976 "name": "NewBaseBdev", 00:16:56.976 "uuid": "5f0002cc-9817-4403-9593-d2b4601ec491", 00:16:56.976 "is_configured": true, 00:16:56.976 "data_offset": 0, 00:16:56.976 "data_size": 65536 00:16:56.976 }, 00:16:56.976 { 00:16:56.976 "name": "BaseBdev2", 00:16:56.976 "uuid": "24add9ee-aa9b-44e5-9e46-68b5734c1f58", 00:16:56.976 "is_configured": true, 00:16:56.976 "data_offset": 0, 00:16:56.976 "data_size": 65536 00:16:56.976 }, 00:16:56.976 { 00:16:56.976 "name": "BaseBdev3", 00:16:56.976 "uuid": "d281ad12-63ab-483e-b0b0-7df98b6db80f", 00:16:56.976 "is_configured": true, 00:16:56.976 "data_offset": 0, 
00:16:56.976 "data_size": 65536 00:16:56.976 }, 00:16:56.976 { 00:16:56.976 "name": "BaseBdev4", 00:16:56.976 "uuid": "0304b2d4-171f-415a-afa4-7f061ad1ff11", 00:16:56.976 "is_configured": true, 00:16:56.976 "data_offset": 0, 00:16:56.976 "data_size": 65536 00:16:56.976 } 00:16:56.976 ] 00:16:56.976 } 00:16:56.976 } 00:16:56.976 }' 00:16:56.976 13:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:57.234 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:57.234 BaseBdev2 00:16:57.234 BaseBdev3 00:16:57.234 BaseBdev4' 00:16:57.234 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:57.234 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:57.234 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:57.234 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:57.234 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.234 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:57.234 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.234 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.234 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:57.234 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:57.234 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:16:57.234 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:57.234 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:57.234 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.234 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.234 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.234 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:57.234 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:57.234 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:57.234 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:57.234 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:57.234 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.234 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.234 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.234 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:57.234 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:57.234 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:57.234 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:16:57.234 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.234 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:57.235 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.493 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.493 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:57.493 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:57.493 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:57.493 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.493 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.493 [2024-12-06 13:11:44.290648] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:57.493 [2024-12-06 13:11:44.290705] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:57.493 [2024-12-06 13:11:44.290842] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:57.493 [2024-12-06 13:11:44.290976] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:57.493 [2024-12-06 13:11:44.290999] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:57.493 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.493 13:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71655 00:16:57.493 13:11:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71655 ']' 00:16:57.493 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71655 00:16:57.493 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:57.493 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:57.493 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71655 00:16:57.493 killing process with pid 71655 00:16:57.493 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:57.493 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:57.493 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71655' 00:16:57.493 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71655 00:16:57.493 [2024-12-06 13:11:44.328201] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:57.493 13:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71655 00:16:57.752 [2024-12-06 13:11:44.704771] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:59.125 00:16:59.125 real 0m12.739s 00:16:59.125 user 0m20.883s 00:16:59.125 sys 0m1.803s 00:16:59.125 ************************************ 00:16:59.125 END TEST raid_state_function_test 00:16:59.125 ************************************ 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.125 13:11:45 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:16:59.125 13:11:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:59.125 13:11:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:59.125 13:11:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:59.125 ************************************ 00:16:59.125 START TEST raid_state_function_test_sb 00:16:59.125 ************************************ 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72332 00:16:59.125 13:11:45 
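The `(( i = 1 )) … (( i <= num_base_bdevs )) … echo BaseBdevN` trace above is bdev_raid.sh building its `base_bdevs` array. A minimal standalone sketch of that loop, assuming `num_base_bdevs=4` to match this run's `raid_state_function_test concat 4 true` invocation (nothing SPDK-specific is needed to run it):

```shell
#!/usr/bin/env bash
# Reconstruction of the base-bdev naming loop traced above.
# num_base_bdevs=4 mirrors the "concat 4 true" arguments of this test run.
num_base_bdevs=4
base_bdevs=()
for (( i = 1; i <= num_base_bdevs; i++ )); do
    base_bdevs+=("BaseBdev$i")
done
echo "${base_bdevs[@]}"   # BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4
```

The trace interleaves one `echo BaseBdevN` per iteration because xtrace prints each command as it runs; the net effect is just the four-element array captured on the `base_bdevs=(...)` line of the log.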
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:59.125 Process raid pid: 72332 00:16:59.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72332' 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72332 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72332 ']' 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:59.125 13:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.125 [2024-12-06 13:11:46.027508] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:16:59.125 [2024-12-06 13:11:46.027684] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:59.384 [2024-12-06 13:11:46.215309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.384 [2024-12-06 13:11:46.386084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.643 [2024-12-06 13:11:46.633808] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:59.643 [2024-12-06 13:11:46.634194] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:00.230 13:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:00.231 13:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:00.231 13:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:00.231 13:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.231 13:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.231 [2024-12-06 13:11:47.033191] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:00.231 [2024-12-06 13:11:47.033304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:00.231 [2024-12-06 13:11:47.033323] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:00.231 [2024-12-06 13:11:47.033348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:00.231 [2024-12-06 13:11:47.033359] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:17:00.231 [2024-12-06 13:11:47.033374] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:00.231 [2024-12-06 13:11:47.033384] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:00.231 [2024-12-06 13:11:47.033398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:00.231 13:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.231 13:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:00.231 13:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:00.231 13:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:00.231 13:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:00.231 13:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.231 13:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:00.231 13:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.231 13:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.231 13:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.231 13:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.231 13:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.231 13:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.231 13:11:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.231 13:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.231 13:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.231 13:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.231 "name": "Existed_Raid", 00:17:00.231 "uuid": "762885bb-c1e3-4025-9f29-fd50e4822b1a", 00:17:00.231 "strip_size_kb": 64, 00:17:00.231 "state": "configuring", 00:17:00.231 "raid_level": "concat", 00:17:00.231 "superblock": true, 00:17:00.231 "num_base_bdevs": 4, 00:17:00.231 "num_base_bdevs_discovered": 0, 00:17:00.231 "num_base_bdevs_operational": 4, 00:17:00.231 "base_bdevs_list": [ 00:17:00.231 { 00:17:00.231 "name": "BaseBdev1", 00:17:00.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.231 "is_configured": false, 00:17:00.231 "data_offset": 0, 00:17:00.231 "data_size": 0 00:17:00.231 }, 00:17:00.231 { 00:17:00.231 "name": "BaseBdev2", 00:17:00.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.231 "is_configured": false, 00:17:00.231 "data_offset": 0, 00:17:00.231 "data_size": 0 00:17:00.231 }, 00:17:00.231 { 00:17:00.231 "name": "BaseBdev3", 00:17:00.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.231 "is_configured": false, 00:17:00.231 "data_offset": 0, 00:17:00.231 "data_size": 0 00:17:00.231 }, 00:17:00.231 { 00:17:00.231 "name": "BaseBdev4", 00:17:00.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.231 "is_configured": false, 00:17:00.231 "data_offset": 0, 00:17:00.231 "data_size": 0 00:17:00.231 } 00:17:00.231 ] 00:17:00.231 }' 00:17:00.231 13:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.231 13:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.797 13:11:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:00.797 13:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.797 13:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.797 [2024-12-06 13:11:47.533251] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:00.797 [2024-12-06 13:11:47.533335] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:00.797 13:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.797 13:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:00.797 13:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.797 13:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.797 [2024-12-06 13:11:47.541250] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:00.797 [2024-12-06 13:11:47.541547] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:00.797 [2024-12-06 13:11:47.541677] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:00.797 [2024-12-06 13:11:47.541821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:00.797 [2024-12-06 13:11:47.541934] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:00.797 [2024-12-06 13:11:47.542080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:00.797 [2024-12-06 13:11:47.542186] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:17:00.797 [2024-12-06 13:11:47.542393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:00.797 13:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.797 13:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:00.797 13:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.797 13:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.797 [2024-12-06 13:11:47.594141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:00.797 BaseBdev1 00:17:00.797 13:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.797 13:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:00.797 13:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:00.797 13:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:00.797 13:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:00.797 13:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:00.797 13:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:00.797 13:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:00.797 13:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.797 13:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.798 13:11:47 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.798 13:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:00.798 13:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.798 13:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.798 [ 00:17:00.798 { 00:17:00.798 "name": "BaseBdev1", 00:17:00.798 "aliases": [ 00:17:00.798 "83a85f24-e5f7-408b-8ef8-1e43af1e80b1" 00:17:00.798 ], 00:17:00.798 "product_name": "Malloc disk", 00:17:00.798 "block_size": 512, 00:17:00.798 "num_blocks": 65536, 00:17:00.798 "uuid": "83a85f24-e5f7-408b-8ef8-1e43af1e80b1", 00:17:00.798 "assigned_rate_limits": { 00:17:00.798 "rw_ios_per_sec": 0, 00:17:00.798 "rw_mbytes_per_sec": 0, 00:17:00.798 "r_mbytes_per_sec": 0, 00:17:00.798 "w_mbytes_per_sec": 0 00:17:00.798 }, 00:17:00.798 "claimed": true, 00:17:00.798 "claim_type": "exclusive_write", 00:17:00.798 "zoned": false, 00:17:00.798 "supported_io_types": { 00:17:00.798 "read": true, 00:17:00.798 "write": true, 00:17:00.798 "unmap": true, 00:17:00.798 "flush": true, 00:17:00.798 "reset": true, 00:17:00.798 "nvme_admin": false, 00:17:00.798 "nvme_io": false, 00:17:00.798 "nvme_io_md": false, 00:17:00.798 "write_zeroes": true, 00:17:00.798 "zcopy": true, 00:17:00.798 "get_zone_info": false, 00:17:00.798 "zone_management": false, 00:17:00.798 "zone_append": false, 00:17:00.798 "compare": false, 00:17:00.798 "compare_and_write": false, 00:17:00.798 "abort": true, 00:17:00.798 "seek_hole": false, 00:17:00.798 "seek_data": false, 00:17:00.798 "copy": true, 00:17:00.798 "nvme_iov_md": false 00:17:00.798 }, 00:17:00.798 "memory_domains": [ 00:17:00.798 { 00:17:00.798 "dma_device_id": "system", 00:17:00.798 "dma_device_type": 1 00:17:00.798 }, 00:17:00.798 { 00:17:00.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.798 "dma_device_type": 2 00:17:00.798 } 
00:17:00.798 ], 00:17:00.798 "driver_specific": {} 00:17:00.798 } 00:17:00.798 ] 00:17:00.798 13:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.798 13:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:00.798 13:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:00.798 13:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:00.798 13:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:00.798 13:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:00.798 13:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.798 13:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:00.798 13:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.798 13:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.798 13:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.798 13:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.798 13:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.798 13:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.798 13:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.798 13:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.798 13:11:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.798 13:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.798 "name": "Existed_Raid", 00:17:00.798 "uuid": "d3369e65-aa21-430a-8906-8b62467bc288", 00:17:00.798 "strip_size_kb": 64, 00:17:00.798 "state": "configuring", 00:17:00.798 "raid_level": "concat", 00:17:00.798 "superblock": true, 00:17:00.798 "num_base_bdevs": 4, 00:17:00.798 "num_base_bdevs_discovered": 1, 00:17:00.798 "num_base_bdevs_operational": 4, 00:17:00.798 "base_bdevs_list": [ 00:17:00.798 { 00:17:00.798 "name": "BaseBdev1", 00:17:00.798 "uuid": "83a85f24-e5f7-408b-8ef8-1e43af1e80b1", 00:17:00.798 "is_configured": true, 00:17:00.798 "data_offset": 2048, 00:17:00.798 "data_size": 63488 00:17:00.798 }, 00:17:00.798 { 00:17:00.798 "name": "BaseBdev2", 00:17:00.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.798 "is_configured": false, 00:17:00.798 "data_offset": 0, 00:17:00.798 "data_size": 0 00:17:00.798 }, 00:17:00.798 { 00:17:00.798 "name": "BaseBdev3", 00:17:00.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.798 "is_configured": false, 00:17:00.798 "data_offset": 0, 00:17:00.798 "data_size": 0 00:17:00.798 }, 00:17:00.798 { 00:17:00.798 "name": "BaseBdev4", 00:17:00.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.798 "is_configured": false, 00:17:00.798 "data_offset": 0, 00:17:00.798 "data_size": 0 00:17:00.798 } 00:17:00.798 ] 00:17:00.798 }' 00:17:00.798 13:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.798 13:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.363 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:01.363 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.363 13:11:48 
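The `verify_raid_bdev_state` steps traced above pull one raid bdev's JSON out of `rpc_cmd bdev_raid_get_bdevs all` with the jq filter shown at bdev/bdev_raid.sh@113. A minimal sketch of that extraction against a stub payload (the JSON below is a trimmed stand-in for illustration, not real RPC output):

```shell
#!/usr/bin/env bash
# Stand-in for `bdev_raid_get_bdevs all` output: a JSON array of raid bdevs.
stub='[{"name":"Existed_Raid","state":"configuring"},{"name":"Other_Raid","state":"online"}]'
# Same filter pattern as the trace: keep only the entry named Existed_Raid,
# then read a field out of it (the test script keeps the whole object in $raid_bdev_info).
echo "$stub" | jq -r '.[] | select(.name == "Existed_Raid") | .state'
# prints: configuring
```

In the real test the selected object carries `strip_size_kb`, `num_base_bdevs_discovered`, and `base_bdevs_list`, which the script compares against the expected `configuring concat 64 4` state.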
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.363 [2024-12-06 13:11:48.138374] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:01.363 [2024-12-06 13:11:48.138494] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:01.363 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.363 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:01.363 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.363 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.363 [2024-12-06 13:11:48.146518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:01.363 [2024-12-06 13:11:48.149503] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:01.363 [2024-12-06 13:11:48.149581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:01.363 [2024-12-06 13:11:48.149598] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:01.363 [2024-12-06 13:11:48.149616] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:01.363 [2024-12-06 13:11:48.149627] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:01.363 [2024-12-06 13:11:48.149642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:01.363 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.363 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:17:01.363 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:01.363 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:01.363 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:01.363 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:01.363 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:01.363 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.363 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:01.363 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.363 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.363 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.363 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.363 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.363 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.363 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.363 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.363 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.363 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:17:01.363 "name": "Existed_Raid", 00:17:01.363 "uuid": "f6e028a8-3e5a-483d-88a3-b90a2707ffc4", 00:17:01.363 "strip_size_kb": 64, 00:17:01.363 "state": "configuring", 00:17:01.363 "raid_level": "concat", 00:17:01.363 "superblock": true, 00:17:01.363 "num_base_bdevs": 4, 00:17:01.363 "num_base_bdevs_discovered": 1, 00:17:01.363 "num_base_bdevs_operational": 4, 00:17:01.363 "base_bdevs_list": [ 00:17:01.363 { 00:17:01.363 "name": "BaseBdev1", 00:17:01.363 "uuid": "83a85f24-e5f7-408b-8ef8-1e43af1e80b1", 00:17:01.363 "is_configured": true, 00:17:01.363 "data_offset": 2048, 00:17:01.363 "data_size": 63488 00:17:01.363 }, 00:17:01.363 { 00:17:01.363 "name": "BaseBdev2", 00:17:01.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.363 "is_configured": false, 00:17:01.363 "data_offset": 0, 00:17:01.363 "data_size": 0 00:17:01.363 }, 00:17:01.363 { 00:17:01.363 "name": "BaseBdev3", 00:17:01.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.363 "is_configured": false, 00:17:01.363 "data_offset": 0, 00:17:01.363 "data_size": 0 00:17:01.363 }, 00:17:01.363 { 00:17:01.363 "name": "BaseBdev4", 00:17:01.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.363 "is_configured": false, 00:17:01.364 "data_offset": 0, 00:17:01.364 "data_size": 0 00:17:01.364 } 00:17:01.364 ] 00:17:01.364 }' 00:17:01.364 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.364 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.930 [2024-12-06 13:11:48.741014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:17:01.930 BaseBdev2 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.930 [ 00:17:01.930 { 00:17:01.930 "name": "BaseBdev2", 00:17:01.930 "aliases": [ 00:17:01.930 "3ff80b2b-e129-469d-a458-15c8fe1b359b" 00:17:01.930 ], 00:17:01.930 "product_name": "Malloc disk", 00:17:01.930 "block_size": 512, 00:17:01.930 "num_blocks": 65536, 00:17:01.930 "uuid": "3ff80b2b-e129-469d-a458-15c8fe1b359b", 
00:17:01.930 "assigned_rate_limits": { 00:17:01.930 "rw_ios_per_sec": 0, 00:17:01.930 "rw_mbytes_per_sec": 0, 00:17:01.930 "r_mbytes_per_sec": 0, 00:17:01.930 "w_mbytes_per_sec": 0 00:17:01.930 }, 00:17:01.930 "claimed": true, 00:17:01.930 "claim_type": "exclusive_write", 00:17:01.930 "zoned": false, 00:17:01.930 "supported_io_types": { 00:17:01.930 "read": true, 00:17:01.930 "write": true, 00:17:01.930 "unmap": true, 00:17:01.930 "flush": true, 00:17:01.930 "reset": true, 00:17:01.930 "nvme_admin": false, 00:17:01.930 "nvme_io": false, 00:17:01.930 "nvme_io_md": false, 00:17:01.930 "write_zeroes": true, 00:17:01.930 "zcopy": true, 00:17:01.930 "get_zone_info": false, 00:17:01.930 "zone_management": false, 00:17:01.930 "zone_append": false, 00:17:01.930 "compare": false, 00:17:01.930 "compare_and_write": false, 00:17:01.930 "abort": true, 00:17:01.930 "seek_hole": false, 00:17:01.930 "seek_data": false, 00:17:01.930 "copy": true, 00:17:01.930 "nvme_iov_md": false 00:17:01.930 }, 00:17:01.930 "memory_domains": [ 00:17:01.930 { 00:17:01.930 "dma_device_id": "system", 00:17:01.930 "dma_device_type": 1 00:17:01.930 }, 00:17:01.930 { 00:17:01.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.930 "dma_device_type": 2 00:17:01.930 } 00:17:01.930 ], 00:17:01.930 "driver_specific": {} 00:17:01.930 } 00:17:01.930 ] 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.930 "name": "Existed_Raid", 00:17:01.930 "uuid": "f6e028a8-3e5a-483d-88a3-b90a2707ffc4", 00:17:01.930 "strip_size_kb": 64, 00:17:01.930 "state": "configuring", 00:17:01.930 "raid_level": "concat", 00:17:01.930 "superblock": true, 00:17:01.930 "num_base_bdevs": 4, 00:17:01.930 "num_base_bdevs_discovered": 2, 00:17:01.930 
"num_base_bdevs_operational": 4, 00:17:01.930 "base_bdevs_list": [ 00:17:01.930 { 00:17:01.930 "name": "BaseBdev1", 00:17:01.930 "uuid": "83a85f24-e5f7-408b-8ef8-1e43af1e80b1", 00:17:01.930 "is_configured": true, 00:17:01.930 "data_offset": 2048, 00:17:01.930 "data_size": 63488 00:17:01.930 }, 00:17:01.930 { 00:17:01.930 "name": "BaseBdev2", 00:17:01.930 "uuid": "3ff80b2b-e129-469d-a458-15c8fe1b359b", 00:17:01.930 "is_configured": true, 00:17:01.930 "data_offset": 2048, 00:17:01.930 "data_size": 63488 00:17:01.930 }, 00:17:01.930 { 00:17:01.930 "name": "BaseBdev3", 00:17:01.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.930 "is_configured": false, 00:17:01.930 "data_offset": 0, 00:17:01.930 "data_size": 0 00:17:01.930 }, 00:17:01.930 { 00:17:01.930 "name": "BaseBdev4", 00:17:01.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.930 "is_configured": false, 00:17:01.930 "data_offset": 0, 00:17:01.930 "data_size": 0 00:17:01.930 } 00:17:01.930 ] 00:17:01.930 }' 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.930 13:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.496 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:02.496 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.496 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.496 [2024-12-06 13:11:49.357660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:02.496 BaseBdev3 00:17:02.496 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.496 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:02.496 13:11:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:02.496 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:02.496 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:02.496 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:02.496 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:02.496 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:02.496 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.496 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.496 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.496 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:02.496 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.496 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.496 [ 00:17:02.496 { 00:17:02.496 "name": "BaseBdev3", 00:17:02.496 "aliases": [ 00:17:02.496 "0993a1b5-e6c7-4d11-939b-82770475ed4c" 00:17:02.496 ], 00:17:02.496 "product_name": "Malloc disk", 00:17:02.496 "block_size": 512, 00:17:02.496 "num_blocks": 65536, 00:17:02.496 "uuid": "0993a1b5-e6c7-4d11-939b-82770475ed4c", 00:17:02.496 "assigned_rate_limits": { 00:17:02.496 "rw_ios_per_sec": 0, 00:17:02.496 "rw_mbytes_per_sec": 0, 00:17:02.496 "r_mbytes_per_sec": 0, 00:17:02.496 "w_mbytes_per_sec": 0 00:17:02.496 }, 00:17:02.496 "claimed": true, 00:17:02.496 "claim_type": "exclusive_write", 00:17:02.496 "zoned": false, 00:17:02.496 "supported_io_types": { 
00:17:02.496 "read": true, 00:17:02.496 "write": true, 00:17:02.496 "unmap": true, 00:17:02.496 "flush": true, 00:17:02.496 "reset": true, 00:17:02.496 "nvme_admin": false, 00:17:02.496 "nvme_io": false, 00:17:02.496 "nvme_io_md": false, 00:17:02.496 "write_zeroes": true, 00:17:02.496 "zcopy": true, 00:17:02.496 "get_zone_info": false, 00:17:02.496 "zone_management": false, 00:17:02.496 "zone_append": false, 00:17:02.496 "compare": false, 00:17:02.496 "compare_and_write": false, 00:17:02.496 "abort": true, 00:17:02.496 "seek_hole": false, 00:17:02.496 "seek_data": false, 00:17:02.496 "copy": true, 00:17:02.496 "nvme_iov_md": false 00:17:02.496 }, 00:17:02.496 "memory_domains": [ 00:17:02.496 { 00:17:02.496 "dma_device_id": "system", 00:17:02.496 "dma_device_type": 1 00:17:02.496 }, 00:17:02.496 { 00:17:02.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.496 "dma_device_type": 2 00:17:02.496 } 00:17:02.496 ], 00:17:02.496 "driver_specific": {} 00:17:02.496 } 00:17:02.496 ] 00:17:02.496 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.496 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:02.496 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:02.496 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:02.496 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:02.496 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:02.496 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:02.496 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:02.496 13:11:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:02.496 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:02.496 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.496 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.496 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.496 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.496 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.496 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.496 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.496 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.496 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.496 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.496 "name": "Existed_Raid", 00:17:02.496 "uuid": "f6e028a8-3e5a-483d-88a3-b90a2707ffc4", 00:17:02.496 "strip_size_kb": 64, 00:17:02.496 "state": "configuring", 00:17:02.496 "raid_level": "concat", 00:17:02.496 "superblock": true, 00:17:02.496 "num_base_bdevs": 4, 00:17:02.496 "num_base_bdevs_discovered": 3, 00:17:02.496 "num_base_bdevs_operational": 4, 00:17:02.496 "base_bdevs_list": [ 00:17:02.496 { 00:17:02.496 "name": "BaseBdev1", 00:17:02.496 "uuid": "83a85f24-e5f7-408b-8ef8-1e43af1e80b1", 00:17:02.496 "is_configured": true, 00:17:02.496 "data_offset": 2048, 00:17:02.496 "data_size": 63488 00:17:02.496 }, 00:17:02.496 { 00:17:02.496 "name": "BaseBdev2", 00:17:02.496 
"uuid": "3ff80b2b-e129-469d-a458-15c8fe1b359b", 00:17:02.496 "is_configured": true, 00:17:02.496 "data_offset": 2048, 00:17:02.496 "data_size": 63488 00:17:02.496 }, 00:17:02.496 { 00:17:02.496 "name": "BaseBdev3", 00:17:02.496 "uuid": "0993a1b5-e6c7-4d11-939b-82770475ed4c", 00:17:02.496 "is_configured": true, 00:17:02.496 "data_offset": 2048, 00:17:02.496 "data_size": 63488 00:17:02.496 }, 00:17:02.496 { 00:17:02.496 "name": "BaseBdev4", 00:17:02.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.497 "is_configured": false, 00:17:02.497 "data_offset": 0, 00:17:02.497 "data_size": 0 00:17:02.497 } 00:17:02.497 ] 00:17:02.497 }' 00:17:02.497 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.497 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.064 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:03.064 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.064 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.064 [2024-12-06 13:11:49.955828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:03.064 [2024-12-06 13:11:49.956543] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:03.064 [2024-12-06 13:11:49.956570] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:03.064 BaseBdev4 00:17:03.064 [2024-12-06 13:11:49.956942] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:03.064 [2024-12-06 13:11:49.957149] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:03.064 [2024-12-06 13:11:49.957181] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:17:03.064 [2024-12-06 13:11:49.957372] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.064 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.065 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:03.065 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:03.065 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:03.065 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:03.065 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:03.065 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:03.065 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:03.065 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.065 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.065 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.065 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:03.065 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.065 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.065 [ 00:17:03.065 { 00:17:03.065 "name": "BaseBdev4", 00:17:03.065 "aliases": [ 00:17:03.065 "80b836e2-bd6a-48ac-bdfe-4c124ab3b992" 00:17:03.065 ], 00:17:03.065 "product_name": "Malloc disk", 00:17:03.065 "block_size": 512, 00:17:03.065 
"num_blocks": 65536, 00:17:03.065 "uuid": "80b836e2-bd6a-48ac-bdfe-4c124ab3b992", 00:17:03.065 "assigned_rate_limits": { 00:17:03.065 "rw_ios_per_sec": 0, 00:17:03.065 "rw_mbytes_per_sec": 0, 00:17:03.065 "r_mbytes_per_sec": 0, 00:17:03.065 "w_mbytes_per_sec": 0 00:17:03.065 }, 00:17:03.065 "claimed": true, 00:17:03.065 "claim_type": "exclusive_write", 00:17:03.065 "zoned": false, 00:17:03.065 "supported_io_types": { 00:17:03.065 "read": true, 00:17:03.065 "write": true, 00:17:03.065 "unmap": true, 00:17:03.065 "flush": true, 00:17:03.065 "reset": true, 00:17:03.065 "nvme_admin": false, 00:17:03.065 "nvme_io": false, 00:17:03.065 "nvme_io_md": false, 00:17:03.065 "write_zeroes": true, 00:17:03.065 "zcopy": true, 00:17:03.065 "get_zone_info": false, 00:17:03.065 "zone_management": false, 00:17:03.065 "zone_append": false, 00:17:03.065 "compare": false, 00:17:03.065 "compare_and_write": false, 00:17:03.065 "abort": true, 00:17:03.065 "seek_hole": false, 00:17:03.065 "seek_data": false, 00:17:03.065 "copy": true, 00:17:03.065 "nvme_iov_md": false 00:17:03.065 }, 00:17:03.065 "memory_domains": [ 00:17:03.065 { 00:17:03.065 "dma_device_id": "system", 00:17:03.065 "dma_device_type": 1 00:17:03.065 }, 00:17:03.065 { 00:17:03.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.065 "dma_device_type": 2 00:17:03.065 } 00:17:03.065 ], 00:17:03.065 "driver_specific": {} 00:17:03.065 } 00:17:03.065 ] 00:17:03.065 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.065 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:03.065 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:03.065 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:03.065 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:17:03.065 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:03.065 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.065 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:03.065 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:03.065 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:03.065 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.065 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.065 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.065 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.065 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.065 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.065 13:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.065 13:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.065 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.065 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.065 "name": "Existed_Raid", 00:17:03.065 "uuid": "f6e028a8-3e5a-483d-88a3-b90a2707ffc4", 00:17:03.065 "strip_size_kb": 64, 00:17:03.065 "state": "online", 00:17:03.065 "raid_level": "concat", 00:17:03.065 "superblock": true, 00:17:03.065 "num_base_bdevs": 4, 
00:17:03.065 "num_base_bdevs_discovered": 4, 00:17:03.065 "num_base_bdevs_operational": 4, 00:17:03.065 "base_bdevs_list": [ 00:17:03.065 { 00:17:03.065 "name": "BaseBdev1", 00:17:03.065 "uuid": "83a85f24-e5f7-408b-8ef8-1e43af1e80b1", 00:17:03.065 "is_configured": true, 00:17:03.065 "data_offset": 2048, 00:17:03.065 "data_size": 63488 00:17:03.065 }, 00:17:03.065 { 00:17:03.065 "name": "BaseBdev2", 00:17:03.065 "uuid": "3ff80b2b-e129-469d-a458-15c8fe1b359b", 00:17:03.065 "is_configured": true, 00:17:03.065 "data_offset": 2048, 00:17:03.065 "data_size": 63488 00:17:03.065 }, 00:17:03.065 { 00:17:03.065 "name": "BaseBdev3", 00:17:03.065 "uuid": "0993a1b5-e6c7-4d11-939b-82770475ed4c", 00:17:03.065 "is_configured": true, 00:17:03.065 "data_offset": 2048, 00:17:03.065 "data_size": 63488 00:17:03.065 }, 00:17:03.065 { 00:17:03.065 "name": "BaseBdev4", 00:17:03.065 "uuid": "80b836e2-bd6a-48ac-bdfe-4c124ab3b992", 00:17:03.065 "is_configured": true, 00:17:03.065 "data_offset": 2048, 00:17:03.065 "data_size": 63488 00:17:03.065 } 00:17:03.065 ] 00:17:03.065 }' 00:17:03.065 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.065 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.631 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:03.631 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:03.631 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:03.631 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:03.631 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:03.631 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:03.631 
13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:03.631 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.631 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:03.631 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.631 [2024-12-06 13:11:50.488545] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:03.631 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.631 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:03.631 "name": "Existed_Raid", 00:17:03.631 "aliases": [ 00:17:03.631 "f6e028a8-3e5a-483d-88a3-b90a2707ffc4" 00:17:03.631 ], 00:17:03.631 "product_name": "Raid Volume", 00:17:03.631 "block_size": 512, 00:17:03.631 "num_blocks": 253952, 00:17:03.631 "uuid": "f6e028a8-3e5a-483d-88a3-b90a2707ffc4", 00:17:03.631 "assigned_rate_limits": { 00:17:03.631 "rw_ios_per_sec": 0, 00:17:03.631 "rw_mbytes_per_sec": 0, 00:17:03.631 "r_mbytes_per_sec": 0, 00:17:03.631 "w_mbytes_per_sec": 0 00:17:03.631 }, 00:17:03.631 "claimed": false, 00:17:03.631 "zoned": false, 00:17:03.631 "supported_io_types": { 00:17:03.631 "read": true, 00:17:03.631 "write": true, 00:17:03.631 "unmap": true, 00:17:03.631 "flush": true, 00:17:03.631 "reset": true, 00:17:03.631 "nvme_admin": false, 00:17:03.631 "nvme_io": false, 00:17:03.631 "nvme_io_md": false, 00:17:03.631 "write_zeroes": true, 00:17:03.631 "zcopy": false, 00:17:03.631 "get_zone_info": false, 00:17:03.631 "zone_management": false, 00:17:03.631 "zone_append": false, 00:17:03.631 "compare": false, 00:17:03.631 "compare_and_write": false, 00:17:03.631 "abort": false, 00:17:03.631 "seek_hole": false, 00:17:03.631 "seek_data": false, 00:17:03.631 "copy": false, 00:17:03.631 
"nvme_iov_md": false 00:17:03.631 }, 00:17:03.631 "memory_domains": [ 00:17:03.631 { 00:17:03.631 "dma_device_id": "system", 00:17:03.631 "dma_device_type": 1 00:17:03.631 }, 00:17:03.631 { 00:17:03.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.631 "dma_device_type": 2 00:17:03.631 }, 00:17:03.631 { 00:17:03.631 "dma_device_id": "system", 00:17:03.631 "dma_device_type": 1 00:17:03.631 }, 00:17:03.631 { 00:17:03.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.631 "dma_device_type": 2 00:17:03.631 }, 00:17:03.631 { 00:17:03.631 "dma_device_id": "system", 00:17:03.631 "dma_device_type": 1 00:17:03.631 }, 00:17:03.631 { 00:17:03.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.631 "dma_device_type": 2 00:17:03.631 }, 00:17:03.631 { 00:17:03.631 "dma_device_id": "system", 00:17:03.631 "dma_device_type": 1 00:17:03.631 }, 00:17:03.631 { 00:17:03.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.631 "dma_device_type": 2 00:17:03.631 } 00:17:03.632 ], 00:17:03.632 "driver_specific": { 00:17:03.632 "raid": { 00:17:03.632 "uuid": "f6e028a8-3e5a-483d-88a3-b90a2707ffc4", 00:17:03.632 "strip_size_kb": 64, 00:17:03.632 "state": "online", 00:17:03.632 "raid_level": "concat", 00:17:03.632 "superblock": true, 00:17:03.632 "num_base_bdevs": 4, 00:17:03.632 "num_base_bdevs_discovered": 4, 00:17:03.632 "num_base_bdevs_operational": 4, 00:17:03.632 "base_bdevs_list": [ 00:17:03.632 { 00:17:03.632 "name": "BaseBdev1", 00:17:03.632 "uuid": "83a85f24-e5f7-408b-8ef8-1e43af1e80b1", 00:17:03.632 "is_configured": true, 00:17:03.632 "data_offset": 2048, 00:17:03.632 "data_size": 63488 00:17:03.632 }, 00:17:03.632 { 00:17:03.632 "name": "BaseBdev2", 00:17:03.632 "uuid": "3ff80b2b-e129-469d-a458-15c8fe1b359b", 00:17:03.632 "is_configured": true, 00:17:03.632 "data_offset": 2048, 00:17:03.632 "data_size": 63488 00:17:03.632 }, 00:17:03.632 { 00:17:03.632 "name": "BaseBdev3", 00:17:03.632 "uuid": "0993a1b5-e6c7-4d11-939b-82770475ed4c", 00:17:03.632 "is_configured": true, 
00:17:03.632 "data_offset": 2048, 00:17:03.632 "data_size": 63488 00:17:03.632 }, 00:17:03.632 { 00:17:03.632 "name": "BaseBdev4", 00:17:03.632 "uuid": "80b836e2-bd6a-48ac-bdfe-4c124ab3b992", 00:17:03.632 "is_configured": true, 00:17:03.632 "data_offset": 2048, 00:17:03.632 "data_size": 63488 00:17:03.632 } 00:17:03.632 ] 00:17:03.632 } 00:17:03.632 } 00:17:03.632 }' 00:17:03.632 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:03.632 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:03.632 BaseBdev2 00:17:03.632 BaseBdev3 00:17:03.632 BaseBdev4' 00:17:03.632 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.632 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:03.632 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:03.632 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:03.632 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.632 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.632 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.632 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.908 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:03.908 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:03.908 13:11:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:03.908 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.908 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:03.908 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.908 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.908 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.908 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:03.908 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:03.908 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:03.908 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:03.908 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.908 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.908 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.908 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.908 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:03.908 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:03.908 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:17:03.908 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:03.908 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.908 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.908 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.908 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.908 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:03.908 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:03.908 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:03.908 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.908 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.908 [2024-12-06 13:11:50.872254] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:03.908 [2024-12-06 13:11:50.872559] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:03.908 [2024-12-06 13:11:50.872697] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:04.170 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.170 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:04.170 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:17:04.170 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:17:04.170 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:17:04.170 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:17:04.170 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:17:04.170 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:04.170 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:17:04.170 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:04.170 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:04.170 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:04.170 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.170 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.170 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.170 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.170 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.170 13:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:04.170 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.170 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.170 13:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:04.170 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.170 "name": "Existed_Raid", 00:17:04.170 "uuid": "f6e028a8-3e5a-483d-88a3-b90a2707ffc4", 00:17:04.170 "strip_size_kb": 64, 00:17:04.170 "state": "offline", 00:17:04.170 "raid_level": "concat", 00:17:04.170 "superblock": true, 00:17:04.170 "num_base_bdevs": 4, 00:17:04.170 "num_base_bdevs_discovered": 3, 00:17:04.170 "num_base_bdevs_operational": 3, 00:17:04.170 "base_bdevs_list": [ 00:17:04.170 { 00:17:04.170 "name": null, 00:17:04.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.170 "is_configured": false, 00:17:04.170 "data_offset": 0, 00:17:04.170 "data_size": 63488 00:17:04.170 }, 00:17:04.170 { 00:17:04.170 "name": "BaseBdev2", 00:17:04.170 "uuid": "3ff80b2b-e129-469d-a458-15c8fe1b359b", 00:17:04.170 "is_configured": true, 00:17:04.170 "data_offset": 2048, 00:17:04.170 "data_size": 63488 00:17:04.170 }, 00:17:04.170 { 00:17:04.170 "name": "BaseBdev3", 00:17:04.170 "uuid": "0993a1b5-e6c7-4d11-939b-82770475ed4c", 00:17:04.170 "is_configured": true, 00:17:04.170 "data_offset": 2048, 00:17:04.170 "data_size": 63488 00:17:04.170 }, 00:17:04.170 { 00:17:04.170 "name": "BaseBdev4", 00:17:04.170 "uuid": "80b836e2-bd6a-48ac-bdfe-4c124ab3b992", 00:17:04.170 "is_configured": true, 00:17:04.170 "data_offset": 2048, 00:17:04.170 "data_size": 63488 00:17:04.170 } 00:17:04.170 ] 00:17:04.170 }' 00:17:04.170 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.170 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.736 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:04.736 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:04.736 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:04.736 13:11:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.736 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.736 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.736 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.736 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:04.736 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:04.736 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:04.736 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.736 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.736 [2024-12-06 13:11:51.525966] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:04.736 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.736 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:04.736 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:04.736 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.736 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:04.736 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.736 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.736 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:04.736 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:04.736 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:04.736 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:04.736 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.736 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.736 [2024-12-06 13:11:51.674732] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:04.994 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.994 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:04.994 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:04.994 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:04.994 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.994 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.994 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.994 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.994 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:04.994 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:04.994 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:04.994 13:11:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.994 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.994 [2024-12-06 13:11:51.831949] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:04.994 [2024-12-06 13:11:51.832271] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:04.994 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.994 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:04.994 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:04.994 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.994 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:04.994 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.994 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.994 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.994 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:04.994 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:04.994 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:04.994 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:04.994 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:04.994 13:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:17:04.994 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.994 13:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.252 BaseBdev2 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.252 [ 00:17:05.252 { 00:17:05.252 "name": "BaseBdev2", 00:17:05.252 "aliases": [ 00:17:05.252 
"c4e97d96-a900-41e0-ba3d-f47def75db87" 00:17:05.252 ], 00:17:05.252 "product_name": "Malloc disk", 00:17:05.252 "block_size": 512, 00:17:05.252 "num_blocks": 65536, 00:17:05.252 "uuid": "c4e97d96-a900-41e0-ba3d-f47def75db87", 00:17:05.252 "assigned_rate_limits": { 00:17:05.252 "rw_ios_per_sec": 0, 00:17:05.252 "rw_mbytes_per_sec": 0, 00:17:05.252 "r_mbytes_per_sec": 0, 00:17:05.252 "w_mbytes_per_sec": 0 00:17:05.252 }, 00:17:05.252 "claimed": false, 00:17:05.252 "zoned": false, 00:17:05.252 "supported_io_types": { 00:17:05.252 "read": true, 00:17:05.252 "write": true, 00:17:05.252 "unmap": true, 00:17:05.252 "flush": true, 00:17:05.252 "reset": true, 00:17:05.252 "nvme_admin": false, 00:17:05.252 "nvme_io": false, 00:17:05.252 "nvme_io_md": false, 00:17:05.252 "write_zeroes": true, 00:17:05.252 "zcopy": true, 00:17:05.252 "get_zone_info": false, 00:17:05.252 "zone_management": false, 00:17:05.252 "zone_append": false, 00:17:05.252 "compare": false, 00:17:05.252 "compare_and_write": false, 00:17:05.252 "abort": true, 00:17:05.252 "seek_hole": false, 00:17:05.252 "seek_data": false, 00:17:05.252 "copy": true, 00:17:05.252 "nvme_iov_md": false 00:17:05.252 }, 00:17:05.252 "memory_domains": [ 00:17:05.252 { 00:17:05.252 "dma_device_id": "system", 00:17:05.252 "dma_device_type": 1 00:17:05.252 }, 00:17:05.252 { 00:17:05.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.252 "dma_device_type": 2 00:17:05.252 } 00:17:05.252 ], 00:17:05.252 "driver_specific": {} 00:17:05.252 } 00:17:05.252 ] 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:05.252 13:11:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.252 BaseBdev3 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.252 [ 00:17:05.252 { 
00:17:05.252 "name": "BaseBdev3", 00:17:05.252 "aliases": [ 00:17:05.252 "6a3a5453-dc7c-40d5-8740-462b3e0e8349" 00:17:05.252 ], 00:17:05.252 "product_name": "Malloc disk", 00:17:05.252 "block_size": 512, 00:17:05.252 "num_blocks": 65536, 00:17:05.252 "uuid": "6a3a5453-dc7c-40d5-8740-462b3e0e8349", 00:17:05.252 "assigned_rate_limits": { 00:17:05.252 "rw_ios_per_sec": 0, 00:17:05.252 "rw_mbytes_per_sec": 0, 00:17:05.252 "r_mbytes_per_sec": 0, 00:17:05.252 "w_mbytes_per_sec": 0 00:17:05.252 }, 00:17:05.252 "claimed": false, 00:17:05.252 "zoned": false, 00:17:05.252 "supported_io_types": { 00:17:05.252 "read": true, 00:17:05.252 "write": true, 00:17:05.252 "unmap": true, 00:17:05.252 "flush": true, 00:17:05.252 "reset": true, 00:17:05.252 "nvme_admin": false, 00:17:05.252 "nvme_io": false, 00:17:05.252 "nvme_io_md": false, 00:17:05.252 "write_zeroes": true, 00:17:05.252 "zcopy": true, 00:17:05.252 "get_zone_info": false, 00:17:05.252 "zone_management": false, 00:17:05.252 "zone_append": false, 00:17:05.252 "compare": false, 00:17:05.252 "compare_and_write": false, 00:17:05.252 "abort": true, 00:17:05.252 "seek_hole": false, 00:17:05.252 "seek_data": false, 00:17:05.252 "copy": true, 00:17:05.252 "nvme_iov_md": false 00:17:05.252 }, 00:17:05.252 "memory_domains": [ 00:17:05.252 { 00:17:05.252 "dma_device_id": "system", 00:17:05.252 "dma_device_type": 1 00:17:05.252 }, 00:17:05.252 { 00:17:05.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.252 "dma_device_type": 2 00:17:05.252 } 00:17:05.252 ], 00:17:05.252 "driver_specific": {} 00:17:05.252 } 00:17:05.252 ] 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.252 BaseBdev4 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:17:05.252 [ 00:17:05.252 { 00:17:05.252 "name": "BaseBdev4", 00:17:05.252 "aliases": [ 00:17:05.252 "f077c9a3-9ba6-49fd-b26d-55b07be8d57e" 00:17:05.252 ], 00:17:05.252 "product_name": "Malloc disk", 00:17:05.252 "block_size": 512, 00:17:05.252 "num_blocks": 65536, 00:17:05.252 "uuid": "f077c9a3-9ba6-49fd-b26d-55b07be8d57e", 00:17:05.252 "assigned_rate_limits": { 00:17:05.252 "rw_ios_per_sec": 0, 00:17:05.252 "rw_mbytes_per_sec": 0, 00:17:05.252 "r_mbytes_per_sec": 0, 00:17:05.252 "w_mbytes_per_sec": 0 00:17:05.252 }, 00:17:05.252 "claimed": false, 00:17:05.252 "zoned": false, 00:17:05.252 "supported_io_types": { 00:17:05.252 "read": true, 00:17:05.252 "write": true, 00:17:05.252 "unmap": true, 00:17:05.252 "flush": true, 00:17:05.252 "reset": true, 00:17:05.252 "nvme_admin": false, 00:17:05.252 "nvme_io": false, 00:17:05.252 "nvme_io_md": false, 00:17:05.252 "write_zeroes": true, 00:17:05.252 "zcopy": true, 00:17:05.252 "get_zone_info": false, 00:17:05.252 "zone_management": false, 00:17:05.252 "zone_append": false, 00:17:05.252 "compare": false, 00:17:05.252 "compare_and_write": false, 00:17:05.252 "abort": true, 00:17:05.252 "seek_hole": false, 00:17:05.252 "seek_data": false, 00:17:05.252 "copy": true, 00:17:05.252 "nvme_iov_md": false 00:17:05.252 }, 00:17:05.252 "memory_domains": [ 00:17:05.252 { 00:17:05.252 "dma_device_id": "system", 00:17:05.252 "dma_device_type": 1 00:17:05.252 }, 00:17:05.252 { 00:17:05.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.252 "dma_device_type": 2 00:17:05.252 } 00:17:05.252 ], 00:17:05.252 "driver_specific": {} 00:17:05.252 } 00:17:05.252 ] 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:05.252 13:11:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.252 [2024-12-06 13:11:52.234626] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:05.252 [2024-12-06 13:11:52.234995] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:05.252 [2024-12-06 13:11:52.235062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:05.252 [2024-12-06 13:11:52.237871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:05.252 [2024-12-06 13:11:52.237952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.252 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.510 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.510 "name": "Existed_Raid", 00:17:05.510 "uuid": "f9bca820-c8b1-4fd2-a6e3-964dfd5f5c72", 00:17:05.510 "strip_size_kb": 64, 00:17:05.510 "state": "configuring", 00:17:05.510 "raid_level": "concat", 00:17:05.510 "superblock": true, 00:17:05.510 "num_base_bdevs": 4, 00:17:05.510 "num_base_bdevs_discovered": 3, 00:17:05.510 "num_base_bdevs_operational": 4, 00:17:05.510 "base_bdevs_list": [ 00:17:05.510 { 00:17:05.510 "name": "BaseBdev1", 00:17:05.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.510 "is_configured": false, 00:17:05.510 "data_offset": 0, 00:17:05.510 "data_size": 0 00:17:05.510 }, 00:17:05.510 { 00:17:05.510 "name": "BaseBdev2", 00:17:05.510 "uuid": "c4e97d96-a900-41e0-ba3d-f47def75db87", 00:17:05.510 "is_configured": true, 00:17:05.510 "data_offset": 2048, 00:17:05.510 "data_size": 63488 
00:17:05.510 }, 00:17:05.510 { 00:17:05.510 "name": "BaseBdev3", 00:17:05.510 "uuid": "6a3a5453-dc7c-40d5-8740-462b3e0e8349", 00:17:05.510 "is_configured": true, 00:17:05.510 "data_offset": 2048, 00:17:05.510 "data_size": 63488 00:17:05.510 }, 00:17:05.510 { 00:17:05.510 "name": "BaseBdev4", 00:17:05.510 "uuid": "f077c9a3-9ba6-49fd-b26d-55b07be8d57e", 00:17:05.510 "is_configured": true, 00:17:05.510 "data_offset": 2048, 00:17:05.510 "data_size": 63488 00:17:05.510 } 00:17:05.510 ] 00:17:05.510 }' 00:17:05.510 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.510 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.768 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:05.768 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.768 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.768 [2024-12-06 13:11:52.730744] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:05.768 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.768 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:05.768 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:05.768 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:05.768 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:05.768 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:05.768 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:17:05.768 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.768 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.768 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.768 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.768 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.768 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.768 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.768 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.768 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.026 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.026 "name": "Existed_Raid", 00:17:06.026 "uuid": "f9bca820-c8b1-4fd2-a6e3-964dfd5f5c72", 00:17:06.026 "strip_size_kb": 64, 00:17:06.026 "state": "configuring", 00:17:06.026 "raid_level": "concat", 00:17:06.026 "superblock": true, 00:17:06.026 "num_base_bdevs": 4, 00:17:06.026 "num_base_bdevs_discovered": 2, 00:17:06.026 "num_base_bdevs_operational": 4, 00:17:06.026 "base_bdevs_list": [ 00:17:06.026 { 00:17:06.026 "name": "BaseBdev1", 00:17:06.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.026 "is_configured": false, 00:17:06.026 "data_offset": 0, 00:17:06.026 "data_size": 0 00:17:06.026 }, 00:17:06.026 { 00:17:06.026 "name": null, 00:17:06.026 "uuid": "c4e97d96-a900-41e0-ba3d-f47def75db87", 00:17:06.026 "is_configured": false, 00:17:06.026 "data_offset": 0, 00:17:06.026 "data_size": 63488 
00:17:06.026 }, 00:17:06.026 { 00:17:06.026 "name": "BaseBdev3", 00:17:06.026 "uuid": "6a3a5453-dc7c-40d5-8740-462b3e0e8349", 00:17:06.026 "is_configured": true, 00:17:06.026 "data_offset": 2048, 00:17:06.026 "data_size": 63488 00:17:06.026 }, 00:17:06.026 { 00:17:06.026 "name": "BaseBdev4", 00:17:06.026 "uuid": "f077c9a3-9ba6-49fd-b26d-55b07be8d57e", 00:17:06.026 "is_configured": true, 00:17:06.026 "data_offset": 2048, 00:17:06.026 "data_size": 63488 00:17:06.026 } 00:17:06.026 ] 00:17:06.026 }' 00:17:06.026 13:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.026 13:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.285 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:06.285 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.285 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.285 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.285 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.543 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:06.543 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:06.543 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.543 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.543 [2024-12-06 13:11:53.368566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:06.543 BaseBdev1 00:17:06.543 13:11:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.543 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:06.543 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:06.543 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:06.543 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:06.543 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:06.543 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:06.543 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:06.543 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.543 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.543 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.543 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:06.543 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.543 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.543 [ 00:17:06.543 { 00:17:06.543 "name": "BaseBdev1", 00:17:06.543 "aliases": [ 00:17:06.543 "d849ca64-ced9-4abf-a0cf-b8267c66d693" 00:17:06.543 ], 00:17:06.543 "product_name": "Malloc disk", 00:17:06.543 "block_size": 512, 00:17:06.543 "num_blocks": 65536, 00:17:06.543 "uuid": "d849ca64-ced9-4abf-a0cf-b8267c66d693", 00:17:06.544 "assigned_rate_limits": { 00:17:06.544 "rw_ios_per_sec": 0, 00:17:06.544 "rw_mbytes_per_sec": 0, 
00:17:06.544 "r_mbytes_per_sec": 0, 00:17:06.544 "w_mbytes_per_sec": 0 00:17:06.544 }, 00:17:06.544 "claimed": true, 00:17:06.544 "claim_type": "exclusive_write", 00:17:06.544 "zoned": false, 00:17:06.544 "supported_io_types": { 00:17:06.544 "read": true, 00:17:06.544 "write": true, 00:17:06.544 "unmap": true, 00:17:06.544 "flush": true, 00:17:06.544 "reset": true, 00:17:06.544 "nvme_admin": false, 00:17:06.544 "nvme_io": false, 00:17:06.544 "nvme_io_md": false, 00:17:06.544 "write_zeroes": true, 00:17:06.544 "zcopy": true, 00:17:06.544 "get_zone_info": false, 00:17:06.544 "zone_management": false, 00:17:06.544 "zone_append": false, 00:17:06.544 "compare": false, 00:17:06.544 "compare_and_write": false, 00:17:06.544 "abort": true, 00:17:06.544 "seek_hole": false, 00:17:06.544 "seek_data": false, 00:17:06.544 "copy": true, 00:17:06.544 "nvme_iov_md": false 00:17:06.544 }, 00:17:06.544 "memory_domains": [ 00:17:06.544 { 00:17:06.544 "dma_device_id": "system", 00:17:06.544 "dma_device_type": 1 00:17:06.544 }, 00:17:06.544 { 00:17:06.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.544 "dma_device_type": 2 00:17:06.544 } 00:17:06.544 ], 00:17:06.544 "driver_specific": {} 00:17:06.544 } 00:17:06.544 ] 00:17:06.544 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.544 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:06.544 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:06.544 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:06.544 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:06.544 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:06.544 13:11:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.544 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:06.544 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.544 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.544 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.544 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.544 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.544 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.544 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.544 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.544 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.544 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.544 "name": "Existed_Raid", 00:17:06.544 "uuid": "f9bca820-c8b1-4fd2-a6e3-964dfd5f5c72", 00:17:06.544 "strip_size_kb": 64, 00:17:06.544 "state": "configuring", 00:17:06.544 "raid_level": "concat", 00:17:06.544 "superblock": true, 00:17:06.544 "num_base_bdevs": 4, 00:17:06.544 "num_base_bdevs_discovered": 3, 00:17:06.544 "num_base_bdevs_operational": 4, 00:17:06.544 "base_bdevs_list": [ 00:17:06.544 { 00:17:06.544 "name": "BaseBdev1", 00:17:06.544 "uuid": "d849ca64-ced9-4abf-a0cf-b8267c66d693", 00:17:06.544 "is_configured": true, 00:17:06.544 "data_offset": 2048, 00:17:06.544 "data_size": 63488 00:17:06.544 }, 00:17:06.544 { 
00:17:06.544 "name": null, 00:17:06.544 "uuid": "c4e97d96-a900-41e0-ba3d-f47def75db87", 00:17:06.544 "is_configured": false, 00:17:06.544 "data_offset": 0, 00:17:06.544 "data_size": 63488 00:17:06.544 }, 00:17:06.544 { 00:17:06.544 "name": "BaseBdev3", 00:17:06.544 "uuid": "6a3a5453-dc7c-40d5-8740-462b3e0e8349", 00:17:06.544 "is_configured": true, 00:17:06.544 "data_offset": 2048, 00:17:06.544 "data_size": 63488 00:17:06.544 }, 00:17:06.544 { 00:17:06.544 "name": "BaseBdev4", 00:17:06.544 "uuid": "f077c9a3-9ba6-49fd-b26d-55b07be8d57e", 00:17:06.544 "is_configured": true, 00:17:06.544 "data_offset": 2048, 00:17:06.544 "data_size": 63488 00:17:06.544 } 00:17:06.544 ] 00:17:06.544 }' 00:17:06.544 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.544 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.111 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:07.111 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.111 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.111 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.111 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.111 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:07.111 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:07.111 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.111 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.111 [2024-12-06 13:11:53.952864] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:07.111 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.111 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:07.111 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:07.111 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:07.111 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:07.111 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.111 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:07.111 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.111 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.111 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.111 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.111 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.111 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.111 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.111 13:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.112 13:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.112 13:11:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.112 "name": "Existed_Raid", 00:17:07.112 "uuid": "f9bca820-c8b1-4fd2-a6e3-964dfd5f5c72", 00:17:07.112 "strip_size_kb": 64, 00:17:07.112 "state": "configuring", 00:17:07.112 "raid_level": "concat", 00:17:07.112 "superblock": true, 00:17:07.112 "num_base_bdevs": 4, 00:17:07.112 "num_base_bdevs_discovered": 2, 00:17:07.112 "num_base_bdevs_operational": 4, 00:17:07.112 "base_bdevs_list": [ 00:17:07.112 { 00:17:07.112 "name": "BaseBdev1", 00:17:07.112 "uuid": "d849ca64-ced9-4abf-a0cf-b8267c66d693", 00:17:07.112 "is_configured": true, 00:17:07.112 "data_offset": 2048, 00:17:07.112 "data_size": 63488 00:17:07.112 }, 00:17:07.112 { 00:17:07.112 "name": null, 00:17:07.112 "uuid": "c4e97d96-a900-41e0-ba3d-f47def75db87", 00:17:07.112 "is_configured": false, 00:17:07.112 "data_offset": 0, 00:17:07.112 "data_size": 63488 00:17:07.112 }, 00:17:07.112 { 00:17:07.112 "name": null, 00:17:07.112 "uuid": "6a3a5453-dc7c-40d5-8740-462b3e0e8349", 00:17:07.112 "is_configured": false, 00:17:07.112 "data_offset": 0, 00:17:07.112 "data_size": 63488 00:17:07.112 }, 00:17:07.112 { 00:17:07.112 "name": "BaseBdev4", 00:17:07.112 "uuid": "f077c9a3-9ba6-49fd-b26d-55b07be8d57e", 00:17:07.112 "is_configured": true, 00:17:07.112 "data_offset": 2048, 00:17:07.112 "data_size": 63488 00:17:07.112 } 00:17:07.112 ] 00:17:07.112 }' 00:17:07.112 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.112 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.678 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.678 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:07.678 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.678 
13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.678 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.678 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:07.678 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:07.678 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.678 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.678 [2024-12-06 13:11:54.512971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:07.678 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.678 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:07.678 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:07.678 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:07.678 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:07.678 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.678 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:07.678 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.678 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.678 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:07.678 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.678 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.678 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.678 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.678 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.678 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.678 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.678 "name": "Existed_Raid", 00:17:07.678 "uuid": "f9bca820-c8b1-4fd2-a6e3-964dfd5f5c72", 00:17:07.678 "strip_size_kb": 64, 00:17:07.678 "state": "configuring", 00:17:07.678 "raid_level": "concat", 00:17:07.678 "superblock": true, 00:17:07.678 "num_base_bdevs": 4, 00:17:07.678 "num_base_bdevs_discovered": 3, 00:17:07.678 "num_base_bdevs_operational": 4, 00:17:07.678 "base_bdevs_list": [ 00:17:07.678 { 00:17:07.678 "name": "BaseBdev1", 00:17:07.678 "uuid": "d849ca64-ced9-4abf-a0cf-b8267c66d693", 00:17:07.678 "is_configured": true, 00:17:07.678 "data_offset": 2048, 00:17:07.678 "data_size": 63488 00:17:07.678 }, 00:17:07.678 { 00:17:07.678 "name": null, 00:17:07.678 "uuid": "c4e97d96-a900-41e0-ba3d-f47def75db87", 00:17:07.678 "is_configured": false, 00:17:07.678 "data_offset": 0, 00:17:07.678 "data_size": 63488 00:17:07.678 }, 00:17:07.679 { 00:17:07.679 "name": "BaseBdev3", 00:17:07.679 "uuid": "6a3a5453-dc7c-40d5-8740-462b3e0e8349", 00:17:07.679 "is_configured": true, 00:17:07.679 "data_offset": 2048, 00:17:07.679 "data_size": 63488 00:17:07.679 }, 00:17:07.679 { 00:17:07.679 "name": "BaseBdev4", 00:17:07.679 "uuid": 
"f077c9a3-9ba6-49fd-b26d-55b07be8d57e", 00:17:07.679 "is_configured": true, 00:17:07.679 "data_offset": 2048, 00:17:07.679 "data_size": 63488 00:17:07.679 } 00:17:07.679 ] 00:17:07.679 }' 00:17:07.679 13:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.679 13:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.245 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.245 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:08.245 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.245 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.245 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.245 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:08.245 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:08.245 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.245 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.245 [2024-12-06 13:11:55.057230] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:08.245 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.245 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:08.245 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:08.245 13:11:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:08.245 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:08.246 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.246 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:08.246 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.246 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.246 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.246 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.246 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.246 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.246 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.246 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.246 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.246 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.246 "name": "Existed_Raid", 00:17:08.246 "uuid": "f9bca820-c8b1-4fd2-a6e3-964dfd5f5c72", 00:17:08.246 "strip_size_kb": 64, 00:17:08.246 "state": "configuring", 00:17:08.246 "raid_level": "concat", 00:17:08.246 "superblock": true, 00:17:08.246 "num_base_bdevs": 4, 00:17:08.246 "num_base_bdevs_discovered": 2, 00:17:08.246 "num_base_bdevs_operational": 4, 00:17:08.246 "base_bdevs_list": [ 00:17:08.246 { 00:17:08.246 "name": null, 00:17:08.246 
"uuid": "d849ca64-ced9-4abf-a0cf-b8267c66d693", 00:17:08.246 "is_configured": false, 00:17:08.246 "data_offset": 0, 00:17:08.246 "data_size": 63488 00:17:08.246 }, 00:17:08.246 { 00:17:08.246 "name": null, 00:17:08.246 "uuid": "c4e97d96-a900-41e0-ba3d-f47def75db87", 00:17:08.246 "is_configured": false, 00:17:08.246 "data_offset": 0, 00:17:08.246 "data_size": 63488 00:17:08.246 }, 00:17:08.246 { 00:17:08.246 "name": "BaseBdev3", 00:17:08.246 "uuid": "6a3a5453-dc7c-40d5-8740-462b3e0e8349", 00:17:08.246 "is_configured": true, 00:17:08.246 "data_offset": 2048, 00:17:08.246 "data_size": 63488 00:17:08.246 }, 00:17:08.246 { 00:17:08.246 "name": "BaseBdev4", 00:17:08.246 "uuid": "f077c9a3-9ba6-49fd-b26d-55b07be8d57e", 00:17:08.246 "is_configured": true, 00:17:08.246 "data_offset": 2048, 00:17:08.246 "data_size": 63488 00:17:08.246 } 00:17:08.246 ] 00:17:08.246 }' 00:17:08.246 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.246 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.810 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.810 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.810 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.810 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:08.810 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.810 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:08.810 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:08.810 13:11:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.810 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.810 [2024-12-06 13:11:55.749989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:08.810 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.810 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:08.810 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:08.810 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:08.810 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:08.810 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.810 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:08.810 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.810 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.810 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.810 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.810 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.810 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.810 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.810 13:11:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.810 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.810 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.810 "name": "Existed_Raid", 00:17:08.810 "uuid": "f9bca820-c8b1-4fd2-a6e3-964dfd5f5c72", 00:17:08.810 "strip_size_kb": 64, 00:17:08.810 "state": "configuring", 00:17:08.810 "raid_level": "concat", 00:17:08.810 "superblock": true, 00:17:08.810 "num_base_bdevs": 4, 00:17:08.810 "num_base_bdevs_discovered": 3, 00:17:08.810 "num_base_bdevs_operational": 4, 00:17:08.810 "base_bdevs_list": [ 00:17:08.810 { 00:17:08.810 "name": null, 00:17:08.810 "uuid": "d849ca64-ced9-4abf-a0cf-b8267c66d693", 00:17:08.810 "is_configured": false, 00:17:08.810 "data_offset": 0, 00:17:08.810 "data_size": 63488 00:17:08.810 }, 00:17:08.810 { 00:17:08.810 "name": "BaseBdev2", 00:17:08.810 "uuid": "c4e97d96-a900-41e0-ba3d-f47def75db87", 00:17:08.810 "is_configured": true, 00:17:08.810 "data_offset": 2048, 00:17:08.810 "data_size": 63488 00:17:08.810 }, 00:17:08.810 { 00:17:08.810 "name": "BaseBdev3", 00:17:08.810 "uuid": "6a3a5453-dc7c-40d5-8740-462b3e0e8349", 00:17:08.810 "is_configured": true, 00:17:08.810 "data_offset": 2048, 00:17:08.810 "data_size": 63488 00:17:08.810 }, 00:17:08.810 { 00:17:08.810 "name": "BaseBdev4", 00:17:08.810 "uuid": "f077c9a3-9ba6-49fd-b26d-55b07be8d57e", 00:17:08.810 "is_configured": true, 00:17:08.810 "data_offset": 2048, 00:17:08.810 "data_size": 63488 00:17:08.810 } 00:17:08.810 ] 00:17:08.811 }' 00:17:08.811 13:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.811 13:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.377 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.377 13:11:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.377 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:09.377 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.377 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.377 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:09.377 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.377 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:09.377 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.377 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.377 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.635 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d849ca64-ced9-4abf-a0cf-b8267c66d693 00:17:09.635 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.635 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.635 [2024-12-06 13:11:56.454204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:09.635 [2024-12-06 13:11:56.454604] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:09.635 [2024-12-06 13:11:56.454655] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:09.635 NewBaseBdev 00:17:09.635 [2024-12-06 13:11:56.455039] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:09.635 [2024-12-06 13:11:56.455229] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:09.635 [2024-12-06 13:11:56.455250] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:09.635 [2024-12-06 13:11:56.455452] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.635 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.635 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:09.635 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:17:09.635 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:09.635 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:09.635 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:09.635 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:09.635 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:09.635 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.635 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.635 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.635 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:09.635 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.635 
13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.635 [ 00:17:09.635 { 00:17:09.635 "name": "NewBaseBdev", 00:17:09.635 "aliases": [ 00:17:09.635 "d849ca64-ced9-4abf-a0cf-b8267c66d693" 00:17:09.635 ], 00:17:09.635 "product_name": "Malloc disk", 00:17:09.635 "block_size": 512, 00:17:09.635 "num_blocks": 65536, 00:17:09.635 "uuid": "d849ca64-ced9-4abf-a0cf-b8267c66d693", 00:17:09.635 "assigned_rate_limits": { 00:17:09.635 "rw_ios_per_sec": 0, 00:17:09.635 "rw_mbytes_per_sec": 0, 00:17:09.635 "r_mbytes_per_sec": 0, 00:17:09.635 "w_mbytes_per_sec": 0 00:17:09.635 }, 00:17:09.635 "claimed": true, 00:17:09.635 "claim_type": "exclusive_write", 00:17:09.635 "zoned": false, 00:17:09.635 "supported_io_types": { 00:17:09.635 "read": true, 00:17:09.635 "write": true, 00:17:09.635 "unmap": true, 00:17:09.635 "flush": true, 00:17:09.635 "reset": true, 00:17:09.635 "nvme_admin": false, 00:17:09.635 "nvme_io": false, 00:17:09.635 "nvme_io_md": false, 00:17:09.635 "write_zeroes": true, 00:17:09.635 "zcopy": true, 00:17:09.635 "get_zone_info": false, 00:17:09.635 "zone_management": false, 00:17:09.635 "zone_append": false, 00:17:09.635 "compare": false, 00:17:09.635 "compare_and_write": false, 00:17:09.635 "abort": true, 00:17:09.635 "seek_hole": false, 00:17:09.635 "seek_data": false, 00:17:09.635 "copy": true, 00:17:09.635 "nvme_iov_md": false 00:17:09.635 }, 00:17:09.635 "memory_domains": [ 00:17:09.635 { 00:17:09.635 "dma_device_id": "system", 00:17:09.635 "dma_device_type": 1 00:17:09.635 }, 00:17:09.635 { 00:17:09.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.635 "dma_device_type": 2 00:17:09.635 } 00:17:09.635 ], 00:17:09.635 "driver_specific": {} 00:17:09.635 } 00:17:09.635 ] 00:17:09.635 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.635 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:09.635 13:11:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:17:09.635 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:09.635 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.635 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:09.635 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:09.635 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:09.635 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.635 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.635 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.635 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.635 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.635 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.635 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.635 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.635 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.635 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.635 "name": "Existed_Raid", 00:17:09.635 "uuid": "f9bca820-c8b1-4fd2-a6e3-964dfd5f5c72", 00:17:09.635 "strip_size_kb": 64, 00:17:09.635 
"state": "online", 00:17:09.635 "raid_level": "concat", 00:17:09.635 "superblock": true, 00:17:09.635 "num_base_bdevs": 4, 00:17:09.635 "num_base_bdevs_discovered": 4, 00:17:09.635 "num_base_bdevs_operational": 4, 00:17:09.635 "base_bdevs_list": [ 00:17:09.635 { 00:17:09.635 "name": "NewBaseBdev", 00:17:09.635 "uuid": "d849ca64-ced9-4abf-a0cf-b8267c66d693", 00:17:09.635 "is_configured": true, 00:17:09.635 "data_offset": 2048, 00:17:09.635 "data_size": 63488 00:17:09.635 }, 00:17:09.635 { 00:17:09.635 "name": "BaseBdev2", 00:17:09.635 "uuid": "c4e97d96-a900-41e0-ba3d-f47def75db87", 00:17:09.635 "is_configured": true, 00:17:09.635 "data_offset": 2048, 00:17:09.635 "data_size": 63488 00:17:09.635 }, 00:17:09.635 { 00:17:09.635 "name": "BaseBdev3", 00:17:09.635 "uuid": "6a3a5453-dc7c-40d5-8740-462b3e0e8349", 00:17:09.635 "is_configured": true, 00:17:09.635 "data_offset": 2048, 00:17:09.635 "data_size": 63488 00:17:09.635 }, 00:17:09.635 { 00:17:09.635 "name": "BaseBdev4", 00:17:09.635 "uuid": "f077c9a3-9ba6-49fd-b26d-55b07be8d57e", 00:17:09.635 "is_configured": true, 00:17:09.635 "data_offset": 2048, 00:17:09.635 "data_size": 63488 00:17:09.635 } 00:17:09.635 ] 00:17:09.635 }' 00:17:09.635 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.635 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.202 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:10.202 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:10.202 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:10.202 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:10.202 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:10.202 
13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:10.202 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:10.202 13:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:10.202 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.202 13:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.202 [2024-12-06 13:11:56.999018] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:10.202 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.202 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:10.202 "name": "Existed_Raid", 00:17:10.202 "aliases": [ 00:17:10.202 "f9bca820-c8b1-4fd2-a6e3-964dfd5f5c72" 00:17:10.202 ], 00:17:10.202 "product_name": "Raid Volume", 00:17:10.202 "block_size": 512, 00:17:10.202 "num_blocks": 253952, 00:17:10.202 "uuid": "f9bca820-c8b1-4fd2-a6e3-964dfd5f5c72", 00:17:10.202 "assigned_rate_limits": { 00:17:10.202 "rw_ios_per_sec": 0, 00:17:10.202 "rw_mbytes_per_sec": 0, 00:17:10.202 "r_mbytes_per_sec": 0, 00:17:10.202 "w_mbytes_per_sec": 0 00:17:10.202 }, 00:17:10.202 "claimed": false, 00:17:10.202 "zoned": false, 00:17:10.202 "supported_io_types": { 00:17:10.202 "read": true, 00:17:10.202 "write": true, 00:17:10.202 "unmap": true, 00:17:10.202 "flush": true, 00:17:10.202 "reset": true, 00:17:10.202 "nvme_admin": false, 00:17:10.202 "nvme_io": false, 00:17:10.202 "nvme_io_md": false, 00:17:10.202 "write_zeroes": true, 00:17:10.202 "zcopy": false, 00:17:10.202 "get_zone_info": false, 00:17:10.202 "zone_management": false, 00:17:10.202 "zone_append": false, 00:17:10.202 "compare": false, 00:17:10.202 "compare_and_write": false, 00:17:10.202 "abort": 
false, 00:17:10.202 "seek_hole": false, 00:17:10.202 "seek_data": false, 00:17:10.202 "copy": false, 00:17:10.202 "nvme_iov_md": false 00:17:10.202 }, 00:17:10.202 "memory_domains": [ 00:17:10.202 { 00:17:10.202 "dma_device_id": "system", 00:17:10.202 "dma_device_type": 1 00:17:10.202 }, 00:17:10.202 { 00:17:10.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.202 "dma_device_type": 2 00:17:10.202 }, 00:17:10.202 { 00:17:10.202 "dma_device_id": "system", 00:17:10.202 "dma_device_type": 1 00:17:10.202 }, 00:17:10.202 { 00:17:10.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.202 "dma_device_type": 2 00:17:10.202 }, 00:17:10.202 { 00:17:10.202 "dma_device_id": "system", 00:17:10.202 "dma_device_type": 1 00:17:10.202 }, 00:17:10.202 { 00:17:10.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.202 "dma_device_type": 2 00:17:10.202 }, 00:17:10.202 { 00:17:10.202 "dma_device_id": "system", 00:17:10.202 "dma_device_type": 1 00:17:10.202 }, 00:17:10.202 { 00:17:10.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.202 "dma_device_type": 2 00:17:10.202 } 00:17:10.202 ], 00:17:10.202 "driver_specific": { 00:17:10.202 "raid": { 00:17:10.202 "uuid": "f9bca820-c8b1-4fd2-a6e3-964dfd5f5c72", 00:17:10.202 "strip_size_kb": 64, 00:17:10.202 "state": "online", 00:17:10.202 "raid_level": "concat", 00:17:10.202 "superblock": true, 00:17:10.202 "num_base_bdevs": 4, 00:17:10.202 "num_base_bdevs_discovered": 4, 00:17:10.202 "num_base_bdevs_operational": 4, 00:17:10.202 "base_bdevs_list": [ 00:17:10.202 { 00:17:10.202 "name": "NewBaseBdev", 00:17:10.202 "uuid": "d849ca64-ced9-4abf-a0cf-b8267c66d693", 00:17:10.202 "is_configured": true, 00:17:10.202 "data_offset": 2048, 00:17:10.202 "data_size": 63488 00:17:10.202 }, 00:17:10.202 { 00:17:10.202 "name": "BaseBdev2", 00:17:10.202 "uuid": "c4e97d96-a900-41e0-ba3d-f47def75db87", 00:17:10.202 "is_configured": true, 00:17:10.202 "data_offset": 2048, 00:17:10.202 "data_size": 63488 00:17:10.202 }, 00:17:10.202 { 00:17:10.202 
"name": "BaseBdev3", 00:17:10.202 "uuid": "6a3a5453-dc7c-40d5-8740-462b3e0e8349", 00:17:10.202 "is_configured": true, 00:17:10.202 "data_offset": 2048, 00:17:10.202 "data_size": 63488 00:17:10.202 }, 00:17:10.202 { 00:17:10.202 "name": "BaseBdev4", 00:17:10.202 "uuid": "f077c9a3-9ba6-49fd-b26d-55b07be8d57e", 00:17:10.202 "is_configured": true, 00:17:10.202 "data_offset": 2048, 00:17:10.202 "data_size": 63488 00:17:10.202 } 00:17:10.202 ] 00:17:10.202 } 00:17:10.202 } 00:17:10.202 }' 00:17:10.202 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:10.202 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:10.202 BaseBdev2 00:17:10.202 BaseBdev3 00:17:10.202 BaseBdev4' 00:17:10.202 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:10.202 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:10.202 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:10.202 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:10.202 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.202 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:10.202 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.202 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.202 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:10.202 13:11:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:10.202 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:10.202 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:10.202 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:10.202 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.202 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.202 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.460 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:10.460 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:10.460 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:10.460 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:10.460 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:10.460 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.460 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.460 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.460 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:10.460 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:17:10.460 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:10.460 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:10.460 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:10.460 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.460 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.460 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.460 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:10.460 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:10.460 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:10.460 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.460 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.460 [2024-12-06 13:11:57.350583] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:10.460 [2024-12-06 13:11:57.350623] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:10.460 [2024-12-06 13:11:57.350749] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:10.460 [2024-12-06 13:11:57.350855] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:10.460 [2024-12-06 13:11:57.350873] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:17:10.460 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.460 13:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72332 00:17:10.460 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72332 ']' 00:17:10.460 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72332 00:17:10.460 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:10.460 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:10.460 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72332 00:17:10.460 killing process with pid 72332 00:17:10.460 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:10.460 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:10.460 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72332' 00:17:10.460 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72332 00:17:10.460 [2024-12-06 13:11:57.389414] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:10.460 13:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72332 00:17:11.024 [2024-12-06 13:11:57.743751] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:11.973 13:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:11.973 00:17:11.973 real 0m12.963s 00:17:11.973 user 0m21.273s 00:17:11.973 sys 0m1.918s 00:17:11.973 13:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:11.973 
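The shutdown steps traced above (`killing process with pid 72332`, `kill 72332`, `wait 72332`) come from the `killprocess` helper in `autotest_common.sh`. The sketch below is a simplified reconstruction from the traced steps only, not the exact SPDK implementation; sudo handling and error paths visible at `@964` are trimmed.

```shell
# Simplified reconstruction of the killprocess pattern traced above
# (autotest_common.sh); sudo and error-path handling are omitted.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1            # a pid must be given
    kill -0 "$pid" || return 1           # the process must still exist
    if [ "$(uname)" = Linux ]; then
        # resolve the command name, e.g. reactor_0 for an SPDK app
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true      # reap it so the pid is released
}
```

Usage mirrors the trace: the test body finishes, then `killprocess 72332` terminates the `bdev_svc` app and waits for it to exit before the next test starts.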
************************************ 00:17:11.973 END TEST raid_state_function_test_sb 00:17:11.973 ************************************ 00:17:11.973 13:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.973 13:11:58 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:17:11.973 13:11:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:11.973 13:11:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:11.973 13:11:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:11.973 ************************************ 00:17:11.973 START TEST raid_superblock_test 00:17:11.973 ************************************ 00:17:11.973 13:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:17:11.973 13:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:17:11.973 13:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:17:11.973 13:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:11.973 13:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:11.973 13:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:11.973 13:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:11.973 13:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:11.973 13:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:11.973 13:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:11.973 13:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:11.973 13:11:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:11.973 13:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:11.973 13:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:11.974 13:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:17:11.974 13:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:17:11.974 13:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:11.974 13:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73020 00:17:11.974 13:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:11.974 13:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73020 00:17:11.974 13:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 73020 ']' 00:17:11.974 13:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.974 13:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:11.974 13:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.974 13:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:11.974 13:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.246 [2024-12-06 13:11:59.034473] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:17:12.246 [2024-12-06 13:11:59.034905] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73020 ] 00:17:12.246 [2024-12-06 13:11:59.213644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.504 [2024-12-06 13:11:59.363000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.761 [2024-12-06 13:11:59.589413] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:12.761 [2024-12-06 13:11:59.589502] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:13.019 13:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:13.019 13:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:17:13.019 13:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:13.019 13:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:13.019 13:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:13.019 13:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:13.019 13:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:13.019 13:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:13.019 13:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:13.019 13:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:13.019 13:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:17:13.019 
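The startup sequence above (launch `bdev_svc -L bdev_raid`, record `raid_pid=73020`, then `waitforlisten 73020` until `/var/tmp/spdk.sock` is up) can be sketched as follows. This is a reduced illustration, not the real `autotest_common.sh` helper: the real one verifies the socket actually answers RPCs, while here the check is simplified to a path-existence test.

```shell
# Reduced sketch of the waitforlisten pattern traced above: poll until the
# app under test ($pid) has created its RPC socket, failing fast if the
# process dies during startup. The real helper also confirms the socket
# answers RPCs; this sketch only polls for the path.
waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # app died while starting
        [ -e "$rpc_addr" ] && return 0           # socket path appeared
        sleep 0.1
    done
    return 1                                     # timed out
}
```

In the trace this pairs with `bdev_raid.sh@412-413`: start the app in the background, capture `raid_pid=$!`, then `waitforlisten $raid_pid` before issuing any `rpc_cmd` calls.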
13:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.019 13:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.019 malloc1 00:17:13.019 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.019 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:13.278 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.278 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.278 [2024-12-06 13:12:00.039842] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:13.278 [2024-12-06 13:12:00.040227] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.278 [2024-12-06 13:12:00.040310] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:13.278 [2024-12-06 13:12:00.040598] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.278 [2024-12-06 13:12:00.043678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.278 [2024-12-06 13:12:00.043847] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:13.278 pt1 00:17:13.278 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.278 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:13.278 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:13.278 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:13.278 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:13.278 13:12:00 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:13.278 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:13.278 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:13.278 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:13.278 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:13.278 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.278 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.279 malloc2 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.279 [2024-12-06 13:12:00.092697] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:13.279 [2024-12-06 13:12:00.092794] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.279 [2024-12-06 13:12:00.092829] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:13.279 [2024-12-06 13:12:00.092844] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.279 [2024-12-06 13:12:00.095812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.279 [2024-12-06 13:12:00.096131] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:13.279 
pt2 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.279 malloc3 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.279 [2024-12-06 13:12:00.164602] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:13.279 [2024-12-06 13:12:00.165052] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.279 [2024-12-06 13:12:00.165139] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:13.279 [2024-12-06 13:12:00.165408] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.279 [2024-12-06 13:12:00.168839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.279 [2024-12-06 13:12:00.169027] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:13.279 pt3 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.279 malloc4 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.279 [2024-12-06 13:12:00.226275] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:13.279 [2024-12-06 13:12:00.226371] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.279 [2024-12-06 13:12:00.226404] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:13.279 [2024-12-06 13:12:00.226418] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.279 [2024-12-06 13:12:00.229612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.279 [2024-12-06 13:12:00.229672] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:13.279 pt4 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.279 [2024-12-06 13:12:00.238326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:13.279 [2024-12-06 
13:12:00.241009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:13.279 [2024-12-06 13:12:00.241280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:13.279 [2024-12-06 13:12:00.241395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:13.279 [2024-12-06 13:12:00.241751] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:13.279 [2024-12-06 13:12:00.241933] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:13.279 [2024-12-06 13:12:00.242329] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:13.279 [2024-12-06 13:12:00.242699] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:13.279 [2024-12-06 13:12:00.242863] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:13.279 [2024-12-06 13:12:00.243326] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.279 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.537 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.537 "name": "raid_bdev1", 00:17:13.537 "uuid": "bd9513ea-d39f-4e8b-bb78-6030f867b2a7", 00:17:13.537 "strip_size_kb": 64, 00:17:13.537 "state": "online", 00:17:13.537 "raid_level": "concat", 00:17:13.537 "superblock": true, 00:17:13.537 "num_base_bdevs": 4, 00:17:13.537 "num_base_bdevs_discovered": 4, 00:17:13.537 "num_base_bdevs_operational": 4, 00:17:13.537 "base_bdevs_list": [ 00:17:13.537 { 00:17:13.537 "name": "pt1", 00:17:13.537 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:13.537 "is_configured": true, 00:17:13.537 "data_offset": 2048, 00:17:13.537 "data_size": 63488 00:17:13.537 }, 00:17:13.537 { 00:17:13.537 "name": "pt2", 00:17:13.537 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:13.537 "is_configured": true, 00:17:13.537 "data_offset": 2048, 00:17:13.537 "data_size": 63488 00:17:13.537 }, 00:17:13.537 { 00:17:13.537 "name": "pt3", 00:17:13.537 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:13.537 "is_configured": true, 00:17:13.537 "data_offset": 2048, 00:17:13.537 
"data_size": 63488 00:17:13.537 }, 00:17:13.537 { 00:17:13.537 "name": "pt4", 00:17:13.537 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:13.537 "is_configured": true, 00:17:13.537 "data_offset": 2048, 00:17:13.537 "data_size": 63488 00:17:13.537 } 00:17:13.537 ] 00:17:13.537 }' 00:17:13.537 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.537 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.794 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:13.794 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:13.794 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:13.794 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:13.794 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:13.794 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:13.794 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:13.794 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.794 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:13.794 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.794 [2024-12-06 13:12:00.779942] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:13.794 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.053 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:14.053 "name": "raid_bdev1", 00:17:14.053 "aliases": [ 00:17:14.053 "bd9513ea-d39f-4e8b-bb78-6030f867b2a7" 
00:17:14.053 ], 00:17:14.053 "product_name": "Raid Volume", 00:17:14.053 "block_size": 512, 00:17:14.053 "num_blocks": 253952, 00:17:14.053 "uuid": "bd9513ea-d39f-4e8b-bb78-6030f867b2a7", 00:17:14.053 "assigned_rate_limits": { 00:17:14.053 "rw_ios_per_sec": 0, 00:17:14.053 "rw_mbytes_per_sec": 0, 00:17:14.053 "r_mbytes_per_sec": 0, 00:17:14.053 "w_mbytes_per_sec": 0 00:17:14.053 }, 00:17:14.053 "claimed": false, 00:17:14.053 "zoned": false, 00:17:14.053 "supported_io_types": { 00:17:14.053 "read": true, 00:17:14.053 "write": true, 00:17:14.053 "unmap": true, 00:17:14.053 "flush": true, 00:17:14.053 "reset": true, 00:17:14.053 "nvme_admin": false, 00:17:14.053 "nvme_io": false, 00:17:14.053 "nvme_io_md": false, 00:17:14.053 "write_zeroes": true, 00:17:14.053 "zcopy": false, 00:17:14.053 "get_zone_info": false, 00:17:14.053 "zone_management": false, 00:17:14.053 "zone_append": false, 00:17:14.053 "compare": false, 00:17:14.053 "compare_and_write": false, 00:17:14.053 "abort": false, 00:17:14.053 "seek_hole": false, 00:17:14.053 "seek_data": false, 00:17:14.053 "copy": false, 00:17:14.053 "nvme_iov_md": false 00:17:14.053 }, 00:17:14.053 "memory_domains": [ 00:17:14.053 { 00:17:14.053 "dma_device_id": "system", 00:17:14.053 "dma_device_type": 1 00:17:14.053 }, 00:17:14.053 { 00:17:14.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.053 "dma_device_type": 2 00:17:14.053 }, 00:17:14.053 { 00:17:14.053 "dma_device_id": "system", 00:17:14.053 "dma_device_type": 1 00:17:14.053 }, 00:17:14.053 { 00:17:14.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.053 "dma_device_type": 2 00:17:14.053 }, 00:17:14.053 { 00:17:14.053 "dma_device_id": "system", 00:17:14.053 "dma_device_type": 1 00:17:14.053 }, 00:17:14.053 { 00:17:14.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.054 "dma_device_type": 2 00:17:14.054 }, 00:17:14.054 { 00:17:14.054 "dma_device_id": "system", 00:17:14.054 "dma_device_type": 1 00:17:14.054 }, 00:17:14.054 { 00:17:14.054 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:14.054 "dma_device_type": 2 00:17:14.054 } 00:17:14.054 ], 00:17:14.054 "driver_specific": { 00:17:14.054 "raid": { 00:17:14.054 "uuid": "bd9513ea-d39f-4e8b-bb78-6030f867b2a7", 00:17:14.054 "strip_size_kb": 64, 00:17:14.054 "state": "online", 00:17:14.054 "raid_level": "concat", 00:17:14.054 "superblock": true, 00:17:14.054 "num_base_bdevs": 4, 00:17:14.054 "num_base_bdevs_discovered": 4, 00:17:14.054 "num_base_bdevs_operational": 4, 00:17:14.054 "base_bdevs_list": [ 00:17:14.054 { 00:17:14.054 "name": "pt1", 00:17:14.054 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:14.054 "is_configured": true, 00:17:14.054 "data_offset": 2048, 00:17:14.054 "data_size": 63488 00:17:14.054 }, 00:17:14.054 { 00:17:14.054 "name": "pt2", 00:17:14.054 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:14.054 "is_configured": true, 00:17:14.054 "data_offset": 2048, 00:17:14.054 "data_size": 63488 00:17:14.054 }, 00:17:14.054 { 00:17:14.054 "name": "pt3", 00:17:14.054 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:14.054 "is_configured": true, 00:17:14.054 "data_offset": 2048, 00:17:14.054 "data_size": 63488 00:17:14.054 }, 00:17:14.054 { 00:17:14.054 "name": "pt4", 00:17:14.054 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:14.054 "is_configured": true, 00:17:14.054 "data_offset": 2048, 00:17:14.054 "data_size": 63488 00:17:14.054 } 00:17:14.054 ] 00:17:14.054 } 00:17:14.054 } 00:17:14.054 }' 00:17:14.054 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:14.054 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:14.054 pt2 00:17:14.054 pt3 00:17:14.054 pt4' 00:17:14.054 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:14.054 13:12:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:14.054 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:14.054 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:14.054 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:14.054 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.054 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.054 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.054 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:14.054 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:14.054 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:14.054 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:14.054 13:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:14.054 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.054 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.054 13:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.054 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:14.054 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:14.054 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:14.054 13:12:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:14.054 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:14.054 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.054 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.054 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.312 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:14.312 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:14.312 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:14.312 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:14.312 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:14.312 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.312 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.312 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.312 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:14.312 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:14.312 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:14.312 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.312 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:17:14.312 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:14.312 [2024-12-06 13:12:01.135968] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:14.312 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.312 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bd9513ea-d39f-4e8b-bb78-6030f867b2a7 00:17:14.312 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z bd9513ea-d39f-4e8b-bb78-6030f867b2a7 ']' 00:17:14.312 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:14.312 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.312 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.312 [2024-12-06 13:12:01.183592] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:14.312 [2024-12-06 13:12:01.183626] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:14.312 [2024-12-06 13:12:01.183740] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:14.312 [2024-12-06 13:12:01.183868] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:14.312 [2024-12-06 13:12:01.183891] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:14.312 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.312 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.312 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.313 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:17:14.313 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:14.313 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.313 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:14.313 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:14.313 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:14.313 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:14.313 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.313 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.313 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.313 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:14.313 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:14.313 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.313 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.313 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.313 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:14.313 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:17:14.313 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.313 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.313 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:17:14.313 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:14.313 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:17:14.313 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.313 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.313 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.313 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:14.313 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.313 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.313 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:14.571 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.571 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:14.571 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:14.571 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:17:14.571 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:14.571 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:14.571 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:14.571 13:12:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:14.571 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:14.571 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:14.571 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.571 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.571 [2024-12-06 13:12:01.339720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:14.571 [2024-12-06 13:12:01.342472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:14.571 [2024-12-06 13:12:01.342553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:14.571 [2024-12-06 13:12:01.342626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:14.571 [2024-12-06 13:12:01.342710] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:14.571 [2024-12-06 13:12:01.342831] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:14.571 [2024-12-06 13:12:01.342866] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:14.571 [2024-12-06 13:12:01.342898] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:17:14.571 [2024-12-06 13:12:01.342920] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:14.571 [2024-12-06 13:12:01.342938] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:17:14.571 request: 00:17:14.571 { 00:17:14.571 "name": "raid_bdev1", 00:17:14.571 "raid_level": "concat", 00:17:14.571 "base_bdevs": [ 00:17:14.571 "malloc1", 00:17:14.571 "malloc2", 00:17:14.571 "malloc3", 00:17:14.571 "malloc4" 00:17:14.571 ], 00:17:14.571 "strip_size_kb": 64, 00:17:14.571 "superblock": false, 00:17:14.571 "method": "bdev_raid_create", 00:17:14.571 "req_id": 1 00:17:14.571 } 00:17:14.571 Got JSON-RPC error response 00:17:14.571 response: 00:17:14.571 { 00:17:14.571 "code": -17, 00:17:14.571 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:14.571 } 00:17:14.571 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:14.571 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:17:14.571 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:14.571 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:14.571 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:14.571 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:14.572 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.572 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.572 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.572 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.572 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:14.572 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:14.572 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:14.572 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.572 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.572 [2024-12-06 13:12:01.403705] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:14.572 [2024-12-06 13:12:01.403808] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.572 [2024-12-06 13:12:01.403869] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:14.572 [2024-12-06 13:12:01.403887] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.572 [2024-12-06 13:12:01.407211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.572 [2024-12-06 13:12:01.407258] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:14.572 [2024-12-06 13:12:01.407365] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:14.572 [2024-12-06 13:12:01.407516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:14.572 pt1 00:17:14.572 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.572 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:17:14.572 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.572 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:14.572 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:14.572 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:14.572 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:17:14.572 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.572 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.572 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.572 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.572 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.572 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.572 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.572 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.572 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.572 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.572 "name": "raid_bdev1", 00:17:14.572 "uuid": "bd9513ea-d39f-4e8b-bb78-6030f867b2a7", 00:17:14.572 "strip_size_kb": 64, 00:17:14.572 "state": "configuring", 00:17:14.572 "raid_level": "concat", 00:17:14.572 "superblock": true, 00:17:14.572 "num_base_bdevs": 4, 00:17:14.572 "num_base_bdevs_discovered": 1, 00:17:14.572 "num_base_bdevs_operational": 4, 00:17:14.572 "base_bdevs_list": [ 00:17:14.572 { 00:17:14.572 "name": "pt1", 00:17:14.572 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:14.572 "is_configured": true, 00:17:14.572 "data_offset": 2048, 00:17:14.572 "data_size": 63488 00:17:14.572 }, 00:17:14.572 { 00:17:14.572 "name": null, 00:17:14.572 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:14.572 "is_configured": false, 00:17:14.572 "data_offset": 2048, 00:17:14.572 "data_size": 63488 00:17:14.572 }, 00:17:14.572 { 00:17:14.572 "name": null, 00:17:14.572 
"uuid": "00000000-0000-0000-0000-000000000003", 00:17:14.572 "is_configured": false, 00:17:14.572 "data_offset": 2048, 00:17:14.572 "data_size": 63488 00:17:14.572 }, 00:17:14.572 { 00:17:14.572 "name": null, 00:17:14.572 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:14.572 "is_configured": false, 00:17:14.572 "data_offset": 2048, 00:17:14.572 "data_size": 63488 00:17:14.572 } 00:17:14.572 ] 00:17:14.572 }' 00:17:14.572 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.572 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.140 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:17:15.140 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:15.140 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.140 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.140 [2024-12-06 13:12:01.943963] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:15.140 [2024-12-06 13:12:01.944318] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.140 [2024-12-06 13:12:01.944363] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:15.140 [2024-12-06 13:12:01.944383] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.140 [2024-12-06 13:12:01.945078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.140 [2024-12-06 13:12:01.945115] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:15.140 [2024-12-06 13:12:01.945243] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:15.140 [2024-12-06 13:12:01.945290] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:15.140 pt2 00:17:15.140 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.140 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:17:15.140 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.140 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.140 [2024-12-06 13:12:01.951917] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:15.140 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.140 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:17:15.140 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.140 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:15.140 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:15.140 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:15.140 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:15.140 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.140 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.140 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.140 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.140 13:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.140 13:12:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.140 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.140 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.140 13:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.140 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.140 "name": "raid_bdev1", 00:17:15.140 "uuid": "bd9513ea-d39f-4e8b-bb78-6030f867b2a7", 00:17:15.140 "strip_size_kb": 64, 00:17:15.140 "state": "configuring", 00:17:15.140 "raid_level": "concat", 00:17:15.140 "superblock": true, 00:17:15.140 "num_base_bdevs": 4, 00:17:15.140 "num_base_bdevs_discovered": 1, 00:17:15.140 "num_base_bdevs_operational": 4, 00:17:15.140 "base_bdevs_list": [ 00:17:15.140 { 00:17:15.140 "name": "pt1", 00:17:15.140 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:15.140 "is_configured": true, 00:17:15.140 "data_offset": 2048, 00:17:15.140 "data_size": 63488 00:17:15.140 }, 00:17:15.140 { 00:17:15.140 "name": null, 00:17:15.140 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:15.140 "is_configured": false, 00:17:15.140 "data_offset": 0, 00:17:15.140 "data_size": 63488 00:17:15.140 }, 00:17:15.140 { 00:17:15.140 "name": null, 00:17:15.140 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:15.140 "is_configured": false, 00:17:15.140 "data_offset": 2048, 00:17:15.140 "data_size": 63488 00:17:15.140 }, 00:17:15.140 { 00:17:15.140 "name": null, 00:17:15.140 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:15.140 "is_configured": false, 00:17:15.140 "data_offset": 2048, 00:17:15.140 "data_size": 63488 00:17:15.140 } 00:17:15.140 ] 00:17:15.140 }' 00:17:15.140 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.140 13:12:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.708 [2024-12-06 13:12:02.476140] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:15.708 [2024-12-06 13:12:02.476243] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.708 [2024-12-06 13:12:02.476278] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:15.708 [2024-12-06 13:12:02.476294] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.708 [2024-12-06 13:12:02.476986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.708 [2024-12-06 13:12:02.477019] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:15.708 [2024-12-06 13:12:02.477134] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:15.708 [2024-12-06 13:12:02.477168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:15.708 pt2 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.708 [2024-12-06 13:12:02.484067] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:15.708 [2024-12-06 13:12:02.484122] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.708 [2024-12-06 13:12:02.484150] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:15.708 [2024-12-06 13:12:02.484174] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.708 [2024-12-06 13:12:02.484651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.708 [2024-12-06 13:12:02.484690] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:15.708 [2024-12-06 13:12:02.484772] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:15.708 [2024-12-06 13:12:02.484823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:15.708 pt3 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.708 [2024-12-06 13:12:02.492046] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:15.708 [2024-12-06 13:12:02.492111] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.708 [2024-12-06 13:12:02.492154] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:15.708 [2024-12-06 13:12:02.492169] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.708 [2024-12-06 13:12:02.492709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.708 [2024-12-06 13:12:02.492749] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:15.708 [2024-12-06 13:12:02.492831] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:15.708 [2024-12-06 13:12:02.492864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:15.708 [2024-12-06 13:12:02.493064] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:15.708 [2024-12-06 13:12:02.493085] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:15.708 [2024-12-06 13:12:02.493392] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:15.708 [2024-12-06 13:12:02.493650] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:15.708 [2024-12-06 13:12:02.493674] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:15.708 [2024-12-06 13:12:02.493860] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.708 pt4 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.708 "name": "raid_bdev1", 00:17:15.708 "uuid": "bd9513ea-d39f-4e8b-bb78-6030f867b2a7", 00:17:15.708 "strip_size_kb": 64, 00:17:15.708 "state": "online", 00:17:15.708 "raid_level": "concat", 00:17:15.708 
"superblock": true, 00:17:15.708 "num_base_bdevs": 4, 00:17:15.708 "num_base_bdevs_discovered": 4, 00:17:15.708 "num_base_bdevs_operational": 4, 00:17:15.708 "base_bdevs_list": [ 00:17:15.708 { 00:17:15.708 "name": "pt1", 00:17:15.708 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:15.708 "is_configured": true, 00:17:15.708 "data_offset": 2048, 00:17:15.708 "data_size": 63488 00:17:15.708 }, 00:17:15.708 { 00:17:15.708 "name": "pt2", 00:17:15.708 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:15.708 "is_configured": true, 00:17:15.708 "data_offset": 2048, 00:17:15.708 "data_size": 63488 00:17:15.708 }, 00:17:15.708 { 00:17:15.708 "name": "pt3", 00:17:15.708 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:15.708 "is_configured": true, 00:17:15.708 "data_offset": 2048, 00:17:15.708 "data_size": 63488 00:17:15.708 }, 00:17:15.708 { 00:17:15.708 "name": "pt4", 00:17:15.708 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:15.708 "is_configured": true, 00:17:15.708 "data_offset": 2048, 00:17:15.708 "data_size": 63488 00:17:15.708 } 00:17:15.708 ] 00:17:15.708 }' 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.708 13:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.275 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:16.275 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:16.276 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:16.276 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:16.276 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:16.276 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:16.276 13:12:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:16.276 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:16.276 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.276 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.276 [2024-12-06 13:12:03.016792] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:16.276 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.276 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:16.276 "name": "raid_bdev1", 00:17:16.276 "aliases": [ 00:17:16.276 "bd9513ea-d39f-4e8b-bb78-6030f867b2a7" 00:17:16.276 ], 00:17:16.276 "product_name": "Raid Volume", 00:17:16.276 "block_size": 512, 00:17:16.276 "num_blocks": 253952, 00:17:16.276 "uuid": "bd9513ea-d39f-4e8b-bb78-6030f867b2a7", 00:17:16.276 "assigned_rate_limits": { 00:17:16.276 "rw_ios_per_sec": 0, 00:17:16.276 "rw_mbytes_per_sec": 0, 00:17:16.276 "r_mbytes_per_sec": 0, 00:17:16.276 "w_mbytes_per_sec": 0 00:17:16.276 }, 00:17:16.276 "claimed": false, 00:17:16.276 "zoned": false, 00:17:16.276 "supported_io_types": { 00:17:16.276 "read": true, 00:17:16.276 "write": true, 00:17:16.276 "unmap": true, 00:17:16.276 "flush": true, 00:17:16.276 "reset": true, 00:17:16.276 "nvme_admin": false, 00:17:16.276 "nvme_io": false, 00:17:16.276 "nvme_io_md": false, 00:17:16.276 "write_zeroes": true, 00:17:16.276 "zcopy": false, 00:17:16.276 "get_zone_info": false, 00:17:16.276 "zone_management": false, 00:17:16.276 "zone_append": false, 00:17:16.276 "compare": false, 00:17:16.276 "compare_and_write": false, 00:17:16.276 "abort": false, 00:17:16.276 "seek_hole": false, 00:17:16.276 "seek_data": false, 00:17:16.276 "copy": false, 00:17:16.276 "nvme_iov_md": false 00:17:16.276 }, 00:17:16.276 
"memory_domains": [ 00:17:16.276 { 00:17:16.276 "dma_device_id": "system", 00:17:16.276 "dma_device_type": 1 00:17:16.276 }, 00:17:16.276 { 00:17:16.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.276 "dma_device_type": 2 00:17:16.276 }, 00:17:16.276 { 00:17:16.276 "dma_device_id": "system", 00:17:16.276 "dma_device_type": 1 00:17:16.276 }, 00:17:16.276 { 00:17:16.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.276 "dma_device_type": 2 00:17:16.276 }, 00:17:16.276 { 00:17:16.276 "dma_device_id": "system", 00:17:16.276 "dma_device_type": 1 00:17:16.276 }, 00:17:16.276 { 00:17:16.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.276 "dma_device_type": 2 00:17:16.276 }, 00:17:16.276 { 00:17:16.276 "dma_device_id": "system", 00:17:16.276 "dma_device_type": 1 00:17:16.276 }, 00:17:16.276 { 00:17:16.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.276 "dma_device_type": 2 00:17:16.276 } 00:17:16.276 ], 00:17:16.276 "driver_specific": { 00:17:16.276 "raid": { 00:17:16.276 "uuid": "bd9513ea-d39f-4e8b-bb78-6030f867b2a7", 00:17:16.276 "strip_size_kb": 64, 00:17:16.276 "state": "online", 00:17:16.276 "raid_level": "concat", 00:17:16.276 "superblock": true, 00:17:16.276 "num_base_bdevs": 4, 00:17:16.276 "num_base_bdevs_discovered": 4, 00:17:16.276 "num_base_bdevs_operational": 4, 00:17:16.276 "base_bdevs_list": [ 00:17:16.276 { 00:17:16.276 "name": "pt1", 00:17:16.276 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:16.276 "is_configured": true, 00:17:16.276 "data_offset": 2048, 00:17:16.276 "data_size": 63488 00:17:16.276 }, 00:17:16.276 { 00:17:16.276 "name": "pt2", 00:17:16.276 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:16.276 "is_configured": true, 00:17:16.276 "data_offset": 2048, 00:17:16.276 "data_size": 63488 00:17:16.276 }, 00:17:16.276 { 00:17:16.276 "name": "pt3", 00:17:16.276 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:16.276 "is_configured": true, 00:17:16.276 "data_offset": 2048, 00:17:16.276 "data_size": 63488 
00:17:16.276 }, 00:17:16.276 { 00:17:16.276 "name": "pt4", 00:17:16.276 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:16.276 "is_configured": true, 00:17:16.276 "data_offset": 2048, 00:17:16.276 "data_size": 63488 00:17:16.276 } 00:17:16.276 ] 00:17:16.276 } 00:17:16.276 } 00:17:16.276 }' 00:17:16.276 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:16.276 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:16.276 pt2 00:17:16.276 pt3 00:17:16.276 pt4' 00:17:16.276 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:16.276 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:16.276 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:16.276 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:16.276 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:16.276 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.276 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.276 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.276 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:16.276 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:16.276 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:16.276 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:17:16.276 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:16.276 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.276 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.276 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.276 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:16.276 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:16.276 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:16.276 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:16.276 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.276 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.276 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:16.565 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.565 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:16.565 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:16.565 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:16.565 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:16.565 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:17:16.566 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.566 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.566 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.566 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:16.566 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:16.566 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:16.566 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:16.566 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.566 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.566 [2024-12-06 13:12:03.392999] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:16.566 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.566 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' bd9513ea-d39f-4e8b-bb78-6030f867b2a7 '!=' bd9513ea-d39f-4e8b-bb78-6030f867b2a7 ']' 00:17:16.566 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:17:16.566 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:16.566 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:17:16.566 13:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73020 00:17:16.566 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 73020 ']' 00:17:16.566 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 73020 00:17:16.566 13:12:03 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@959 -- # uname 00:17:16.566 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:16.566 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73020 00:17:16.566 killing process with pid 73020 00:17:16.566 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:16.566 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:16.566 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73020' 00:17:16.566 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 73020 00:17:16.566 [2024-12-06 13:12:03.485164] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:16.566 13:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 73020 00:17:16.566 [2024-12-06 13:12:03.485311] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:16.566 [2024-12-06 13:12:03.485425] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:16.566 [2024-12-06 13:12:03.485441] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:17.143 [2024-12-06 13:12:03.868102] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:18.078 13:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:18.078 00:17:18.078 real 0m6.095s 00:17:18.078 user 0m8.971s 00:17:18.078 sys 0m1.002s 00:17:18.078 13:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:18.078 13:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.078 ************************************ 00:17:18.078 END TEST raid_superblock_test 
00:17:18.078 ************************************ 00:17:18.078 13:12:05 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:17:18.078 13:12:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:18.078 13:12:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:18.078 13:12:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:18.078 ************************************ 00:17:18.078 START TEST raid_read_error_test 00:17:18.078 ************************************ 00:17:18.078 13:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:17:18.078 13:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:17:18.078 13:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:17:18.078 13:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:17:18.078 13:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:17:18.078 13:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:18.078 13:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:17:18.078 13:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:18.078 13:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:18.078 13:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:17:18.078 13:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:18.078 13:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:18.078 13:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:17:18.078 13:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:17:18.078 13:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:18.078 13:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:17:18.078 13:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:18.078 13:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:18.078 13:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:18.078 13:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:17:18.078 13:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:17:18.078 13:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:17:18.078 13:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:17:18.078 13:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:17:18.078 13:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:17:18.078 13:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:17:18.078 13:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:17:18.078 13:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:17:18.336 13:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:17:18.336 13:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.70N7HbPkMK 00:17:18.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:18.336 13:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73285 00:17:18.336 13:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73285 00:17:18.336 13:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73285 ']' 00:17:18.336 13:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.336 13:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:18.336 13:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:18.336 13:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.336 13:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:18.336 13:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.336 [2024-12-06 13:12:05.205018] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:17:18.336 [2024-12-06 13:12:05.205196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73285 ] 00:17:18.593 [2024-12-06 13:12:05.384443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.593 [2024-12-06 13:12:05.537832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.851 [2024-12-06 13:12:05.767213] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:18.851 [2024-12-06 13:12:05.767322] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:19.418 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:19.418 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:17:19.418 13:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:19.418 13:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:19.418 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.418 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.418 BaseBdev1_malloc 00:17:19.418 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.418 13:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:17:19.418 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.418 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.418 true 00:17:19.418 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:19.418 13:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:19.418 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.418 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.418 [2024-12-06 13:12:06.235655] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:19.418 [2024-12-06 13:12:06.235988] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.418 [2024-12-06 13:12:06.236032] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:19.418 [2024-12-06 13:12:06.236053] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.418 [2024-12-06 13:12:06.239060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.418 [2024-12-06 13:12:06.239250] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:19.418 BaseBdev1 00:17:19.418 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.419 13:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:19.419 13:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:19.419 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.419 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.419 BaseBdev2_malloc 00:17:19.419 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.419 13:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:17:19.419 13:12:06 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.419 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.419 true 00:17:19.419 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.419 13:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:19.419 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.419 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.419 [2024-12-06 13:12:06.297085] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:19.419 [2024-12-06 13:12:06.297172] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.419 [2024-12-06 13:12:06.297200] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:19.419 [2024-12-06 13:12:06.297218] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.419 [2024-12-06 13:12:06.300344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.419 [2024-12-06 13:12:06.300397] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:19.419 BaseBdev2 00:17:19.419 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.419 13:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:19.419 13:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:19.419 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.419 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.419 BaseBdev3_malloc 00:17:19.419 13:12:06 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.419 13:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:17:19.419 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.419 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.419 true 00:17:19.419 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.419 13:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:19.419 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.419 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.419 [2024-12-06 13:12:06.376290] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:19.419 [2024-12-06 13:12:06.376370] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.419 [2024-12-06 13:12:06.376398] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:19.419 [2024-12-06 13:12:06.376415] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.419 [2024-12-06 13:12:06.379396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.419 [2024-12-06 13:12:06.379444] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:19.419 BaseBdev3 00:17:19.419 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.419 13:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:19.419 13:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:17:19.419 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.419 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.419 BaseBdev4_malloc 00:17:19.419 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.419 13:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:17:19.419 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.419 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.678 true 00:17:19.678 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.678 13:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:17:19.678 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.678 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.678 [2024-12-06 13:12:06.437738] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:17:19.678 [2024-12-06 13:12:06.437835] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.678 [2024-12-06 13:12:06.437871] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:19.678 [2024-12-06 13:12:06.437895] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.678 [2024-12-06 13:12:06.440862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.678 [2024-12-06 13:12:06.441119] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:19.678 BaseBdev4 00:17:19.678 13:12:06 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.678 13:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:17:19.678 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.678 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.678 [2024-12-06 13:12:06.449881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:19.678 [2024-12-06 13:12:06.452529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:19.678 [2024-12-06 13:12:06.452638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:19.678 [2024-12-06 13:12:06.452735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:19.678 [2024-12-06 13:12:06.453045] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:17:19.678 [2024-12-06 13:12:06.453069] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:19.678 [2024-12-06 13:12:06.453369] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:17:19.678 [2024-12-06 13:12:06.453625] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:17:19.678 [2024-12-06 13:12:06.453644] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:17:19.678 [2024-12-06 13:12:06.453917] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:19.678 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.678 13:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:19.678 13:12:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.678 13:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.678 13:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:19.678 13:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:19.679 13:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:19.679 13:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.679 13:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.679 13:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.679 13:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.679 13:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.679 13:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.679 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.679 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.679 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.679 13:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.679 "name": "raid_bdev1", 00:17:19.679 "uuid": "d84dc7b7-4e02-4187-9c81-dbc912b9d760", 00:17:19.679 "strip_size_kb": 64, 00:17:19.679 "state": "online", 00:17:19.679 "raid_level": "concat", 00:17:19.679 "superblock": true, 00:17:19.679 "num_base_bdevs": 4, 00:17:19.679 "num_base_bdevs_discovered": 4, 00:17:19.679 "num_base_bdevs_operational": 4, 00:17:19.679 "base_bdevs_list": [ 
00:17:19.679 { 00:17:19.679 "name": "BaseBdev1", 00:17:19.679 "uuid": "7c17fcb8-9b50-521f-8904-97a6c7fc0c1a", 00:17:19.679 "is_configured": true, 00:17:19.679 "data_offset": 2048, 00:17:19.679 "data_size": 63488 00:17:19.679 }, 00:17:19.679 { 00:17:19.679 "name": "BaseBdev2", 00:17:19.679 "uuid": "ea1e27d4-da3a-5181-85e9-9fdfe247b673", 00:17:19.679 "is_configured": true, 00:17:19.679 "data_offset": 2048, 00:17:19.679 "data_size": 63488 00:17:19.679 }, 00:17:19.679 { 00:17:19.679 "name": "BaseBdev3", 00:17:19.679 "uuid": "06a67790-d16c-5d16-b7e9-3e2456c370c0", 00:17:19.679 "is_configured": true, 00:17:19.679 "data_offset": 2048, 00:17:19.679 "data_size": 63488 00:17:19.679 }, 00:17:19.679 { 00:17:19.679 "name": "BaseBdev4", 00:17:19.679 "uuid": "d5a5412b-297d-5db8-8e76-8ea270b0945e", 00:17:19.679 "is_configured": true, 00:17:19.679 "data_offset": 2048, 00:17:19.679 "data_size": 63488 00:17:19.679 } 00:17:19.679 ] 00:17:19.679 }' 00:17:19.679 13:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.679 13:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.255 13:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:17:20.255 13:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:20.255 [2024-12-06 13:12:07.079661] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:17:21.191 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:21.191 13:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.191 13:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.191 13:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.191 13:12:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:17:21.191 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:17:21.191 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:17:21.191 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:21.191 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:21.191 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.191 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:21.191 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:21.191 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:21.191 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.191 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.191 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.191 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.191 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.191 13:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.191 13:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.191 13:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.191 13:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.191 13:12:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.191 "name": "raid_bdev1", 00:17:21.191 "uuid": "d84dc7b7-4e02-4187-9c81-dbc912b9d760", 00:17:21.191 "strip_size_kb": 64, 00:17:21.191 "state": "online", 00:17:21.191 "raid_level": "concat", 00:17:21.191 "superblock": true, 00:17:21.191 "num_base_bdevs": 4, 00:17:21.191 "num_base_bdevs_discovered": 4, 00:17:21.191 "num_base_bdevs_operational": 4, 00:17:21.191 "base_bdevs_list": [ 00:17:21.191 { 00:17:21.191 "name": "BaseBdev1", 00:17:21.191 "uuid": "7c17fcb8-9b50-521f-8904-97a6c7fc0c1a", 00:17:21.191 "is_configured": true, 00:17:21.191 "data_offset": 2048, 00:17:21.191 "data_size": 63488 00:17:21.191 }, 00:17:21.191 { 00:17:21.191 "name": "BaseBdev2", 00:17:21.191 "uuid": "ea1e27d4-da3a-5181-85e9-9fdfe247b673", 00:17:21.191 "is_configured": true, 00:17:21.191 "data_offset": 2048, 00:17:21.191 "data_size": 63488 00:17:21.191 }, 00:17:21.191 { 00:17:21.191 "name": "BaseBdev3", 00:17:21.191 "uuid": "06a67790-d16c-5d16-b7e9-3e2456c370c0", 00:17:21.191 "is_configured": true, 00:17:21.191 "data_offset": 2048, 00:17:21.191 "data_size": 63488 00:17:21.191 }, 00:17:21.191 { 00:17:21.191 "name": "BaseBdev4", 00:17:21.191 "uuid": "d5a5412b-297d-5db8-8e76-8ea270b0945e", 00:17:21.191 "is_configured": true, 00:17:21.191 "data_offset": 2048, 00:17:21.191 "data_size": 63488 00:17:21.191 } 00:17:21.191 ] 00:17:21.191 }' 00:17:21.191 13:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.191 13:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.759 13:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:21.759 13:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.759 13:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.759 [2024-12-06 13:12:08.487026] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:21.759 [2024-12-06 13:12:08.487320] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:21.759 [2024-12-06 13:12:08.490988] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:21.759 [2024-12-06 13:12:08.491274] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:21.759 [2024-12-06 13:12:08.491542] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:21.759 [2024-12-06 13:12:08.491721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline
00:17:21.759 {
00:17:21.759 "results": [
00:17:21.759 {
00:17:21.759 "job": "raid_bdev1",
00:17:21.759 "core_mask": "0x1",
00:17:21.759 "workload": "randrw",
00:17:21.759 "percentage": 50,
00:17:21.759 "status": "finished",
00:17:21.759 "queue_depth": 1,
00:17:21.759 "io_size": 131072,
00:17:21.759 "runtime": 1.404772,
00:17:21.759 "iops": 9471.999726646032,
00:17:21.759 "mibps": 1183.999965830754,
00:17:21.759 "io_failed": 1,
00:17:21.759 "io_timeout": 0,
00:17:21.759 "avg_latency_us": 147.92141606946447,
00:17:21.759 "min_latency_us": 40.02909090909091,
00:17:21.759 "max_latency_us": 1809.6872727272728
00:17:21.759 }
00:17:21.759 ],
00:17:21.759 "core_count": 1
00:17:21.759 }
00:17:21.759 13:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:21.759 13:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73285
00:17:21.759 13:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73285 ']'
00:17:21.759 13:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73285
00:17:21.759 13:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname
00:17:21.759 13:12:08 bdev_raid.raid_read_error_test
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:21.759 13:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73285 00:17:21.759 killing process with pid 73285 00:17:21.759 13:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:21.759 13:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:21.759 13:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73285' 00:17:21.759 13:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73285 00:17:21.759 13:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73285 00:17:21.759 [2024-12-06 13:12:08.527437] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:22.018 [2024-12-06 13:12:08.837623] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:23.393 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.70N7HbPkMK 00:17:23.393 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:17:23.393 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:17:23.393 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:17:23.393 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:17:23.393 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:23.393 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:17:23.393 13:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:17:23.393 00:17:23.393 real 0m4.967s 00:17:23.393 user 0m5.975s 00:17:23.393 sys 0m0.659s 00:17:23.393 13:12:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:17:23.393 ************************************ 00:17:23.393 END TEST raid_read_error_test 00:17:23.393 ************************************ 00:17:23.393 13:12:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.393 13:12:10 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:17:23.393 13:12:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:23.393 13:12:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:23.393 13:12:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:23.393 ************************************ 00:17:23.393 START TEST raid_write_error_test 00:17:23.393 ************************************ 00:17:23.393 13:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:17:23.393 13:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:17:23.393 13:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:17:23.393 13:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:17:23.393 13:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:17:23.393 13:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:23.393 13:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:17:23.393 13:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:23.393 13:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:23.393 13:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:17:23.393 13:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:23.393 13:12:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:23.393 13:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:17:23.393 13:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:23.393 13:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:23.393 13:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:17:23.393 13:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:23.393 13:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:23.393 13:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:23.393 13:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:17:23.393 13:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:17:23.393 13:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:17:23.393 13:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:17:23.393 13:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:17:23.393 13:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:17:23.393 13:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:17:23.393 13:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:17:23.393 13:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:17:23.394 13:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:17:23.394 13:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.QxFgBD7ksK 00:17:23.394 13:12:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73436 00:17:23.394 13:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73436 00:17:23.394 13:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:23.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.394 13:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73436 ']' 00:17:23.394 13:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.394 13:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:23.394 13:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.394 13:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:23.394 13:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.394 [2024-12-06 13:12:10.217945] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:17:23.394 [2024-12-06 13:12:10.218994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73436 ] 00:17:23.394 [2024-12-06 13:12:10.397237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.651 [2024-12-06 13:12:10.544817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.909 [2024-12-06 13:12:10.773324] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.909 [2024-12-06 13:12:10.773417] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:24.474 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:24.474 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:17:24.474 13:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:24.474 13:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:24.474 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.474 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.474 BaseBdev1_malloc 00:17:24.474 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.474 13:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:17:24.474 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.474 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.474 true 00:17:24.474 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:24.474 13:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:24.474 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.474 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.474 [2024-12-06 13:12:11.348462] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:24.474 [2024-12-06 13:12:11.348764] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.475 [2024-12-06 13:12:11.348829] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:24.475 [2024-12-06 13:12:11.348870] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.475 [2024-12-06 13:12:11.352563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.475 [2024-12-06 13:12:11.352844] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:24.475 BaseBdev1 00:17:24.475 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.475 13:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:24.475 13:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:24.475 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.475 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.475 BaseBdev2_malloc 00:17:24.475 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.475 13:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:17:24.475 13:12:11 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.475 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.475 true 00:17:24.475 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.475 13:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:24.475 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.475 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.475 [2024-12-06 13:12:11.428540] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:24.475 [2024-12-06 13:12:11.428642] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.475 [2024-12-06 13:12:11.428688] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:24.475 [2024-12-06 13:12:11.428707] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.475 [2024-12-06 13:12:11.432263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.475 [2024-12-06 13:12:11.432518] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:24.475 BaseBdev2 00:17:24.475 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.475 13:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:24.475 13:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:24.475 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.475 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:17:24.750 BaseBdev3_malloc 00:17:24.750 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.750 13:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:17:24.750 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.750 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.750 true 00:17:24.750 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.750 13:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:24.750 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.750 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.750 [2024-12-06 13:12:11.507934] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:24.750 [2024-12-06 13:12:11.508006] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.750 [2024-12-06 13:12:11.508035] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:24.750 [2024-12-06 13:12:11.508053] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.750 [2024-12-06 13:12:11.511361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.750 [2024-12-06 13:12:11.511633] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:24.750 BaseBdev3 00:17:24.750 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.750 13:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:24.750 13:12:11 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:24.750 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.750 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.750 BaseBdev4_malloc 00:17:24.750 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.750 13:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:17:24.750 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.750 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.750 true 00:17:24.750 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.750 13:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:17:24.750 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.750 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.750 [2024-12-06 13:12:11.574818] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:17:24.750 [2024-12-06 13:12:11.574894] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.750 [2024-12-06 13:12:11.574925] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:24.750 [2024-12-06 13:12:11.574945] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.750 [2024-12-06 13:12:11.578110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.750 [2024-12-06 13:12:11.578175] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:24.750 BaseBdev4 
00:17:24.750 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.750 13:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:17:24.750 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.750 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.750 [2024-12-06 13:12:11.583131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:24.750 [2024-12-06 13:12:11.585904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:24.750 [2024-12-06 13:12:11.586009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:24.750 [2024-12-06 13:12:11.586108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:24.750 [2024-12-06 13:12:11.586406] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:17:24.751 [2024-12-06 13:12:11.586430] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:24.751 [2024-12-06 13:12:11.586844] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:17:24.751 [2024-12-06 13:12:11.587173] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:17:24.751 [2024-12-06 13:12:11.587212] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:17:24.751 [2024-12-06 13:12:11.587574] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.751 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.751 13:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4
00:17:24.751 13:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:24.751 13:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:24.751 13:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:17:24.751 13:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:17:24.751 13:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:17:24.751 13:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:24.751 13:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:24.751 13:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:24.751 13:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:24.751 13:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:24.751 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:24.751 13:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:24.751 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:24.751 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:24.751 13:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:24.751 "name": "raid_bdev1",
00:17:24.751 "uuid": "d9fe5655-1032-4b9d-ad43-0b6460574be8",
00:17:24.751 "strip_size_kb": 64,
00:17:24.751 "state": "online",
00:17:24.751 "raid_level": "concat",
00:17:24.751 "superblock": true,
00:17:24.751 "num_base_bdevs": 4,
00:17:24.751 "num_base_bdevs_discovered": 4,
00:17:24.751 "num_base_bdevs_operational": 4,
00:17:24.751 "base_bdevs_list": [
00:17:24.751 {
00:17:24.751 "name": "BaseBdev1",
00:17:24.751 "uuid": "3ec07252-9ea6-5ad5-a8bc-c8ee586efea8",
00:17:24.751 "is_configured": true,
00:17:24.751 "data_offset": 2048,
00:17:24.751 "data_size": 63488
00:17:24.751 },
00:17:24.751 {
00:17:24.751 "name": "BaseBdev2",
00:17:24.751 "uuid": "b4357d0a-6890-5fd3-a77c-96a68e0c9a4e",
00:17:24.751 "is_configured": true,
00:17:24.751 "data_offset": 2048,
00:17:24.751 "data_size": 63488
00:17:24.751 },
00:17:24.751 {
00:17:24.751 "name": "BaseBdev3",
00:17:24.751 "uuid": "423b59e2-2582-5862-855c-1ab648e8aad5",
00:17:24.751 "is_configured": true,
00:17:24.751 "data_offset": 2048,
00:17:24.751 "data_size": 63488
00:17:24.751 },
00:17:24.751 {
00:17:24.751 "name": "BaseBdev4",
00:17:24.751 "uuid": "a9401f83-e2f2-557f-8eea-6d7bdfc4ffd2",
00:17:24.751 "is_configured": true,
00:17:24.751 "data_offset": 2048,
00:17:24.751 "data_size": 63488
00:17:24.751 }
00:17:24.751 ]
00:17:24.751 }'
00:17:24.751 13:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:24.751 13:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:25.318 13:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:17:25.318 13:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:17:25.318 [2024-12-06 13:12:12.269234] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40
00:17:26.271 13:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:17:26.271 13:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.271 13:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:26.271 13:12:13 bdev_raid.raid_write_error_test --
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.271 13:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:17:26.271 13:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:17:26.271 13:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:17:26.271 13:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:26.271 13:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.271 13:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.271 13:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:26.271 13:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:26.271 13:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:26.271 13:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.271 13:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.271 13:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.271 13:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.271 13:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.271 13:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.271 13:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.271 13:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.271 13:12:13 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.271 13:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:26.271 "name": "raid_bdev1",
00:17:26.271 "uuid": "d9fe5655-1032-4b9d-ad43-0b6460574be8",
00:17:26.271 "strip_size_kb": 64,
00:17:26.271 "state": "online",
00:17:26.271 "raid_level": "concat",
00:17:26.271 "superblock": true,
00:17:26.271 "num_base_bdevs": 4,
00:17:26.271 "num_base_bdevs_discovered": 4,
00:17:26.271 "num_base_bdevs_operational": 4,
00:17:26.271 "base_bdevs_list": [
00:17:26.271 {
00:17:26.271 "name": "BaseBdev1",
00:17:26.271 "uuid": "3ec07252-9ea6-5ad5-a8bc-c8ee586efea8",
00:17:26.271 "is_configured": true,
00:17:26.271 "data_offset": 2048,
00:17:26.271 "data_size": 63488
00:17:26.271 },
00:17:26.271 {
00:17:26.271 "name": "BaseBdev2",
00:17:26.271 "uuid": "b4357d0a-6890-5fd3-a77c-96a68e0c9a4e",
00:17:26.271 "is_configured": true,
00:17:26.271 "data_offset": 2048,
00:17:26.271 "data_size": 63488
00:17:26.271 },
00:17:26.271 {
00:17:26.271 "name": "BaseBdev3",
00:17:26.271 "uuid": "423b59e2-2582-5862-855c-1ab648e8aad5",
00:17:26.271 "is_configured": true,
00:17:26.271 "data_offset": 2048,
00:17:26.271 "data_size": 63488
00:17:26.271 },
00:17:26.271 {
00:17:26.271 "name": "BaseBdev4",
00:17:26.271 "uuid": "a9401f83-e2f2-557f-8eea-6d7bdfc4ffd2",
00:17:26.271 "is_configured": true,
00:17:26.271 "data_offset": 2048,
00:17:26.271 "data_size": 63488
00:17:26.271 }
00:17:26.271 ]
00:17:26.271 }'
00:17:26.271 13:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:26.271 13:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:26.839 13:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:17:26.839 13:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.839 13:12:13 bdev_raid.raid_write_error_test --
common/autotest_common.sh@10 -- # set +x
00:17:26.839 [2024-12-06 13:12:13.680365] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:26.839 [2024-12-06 13:12:13.680638] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:26.839 [2024-12-06 13:12:13.684431] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:26.839 {
00:17:26.839 "results": [
00:17:26.839 {
00:17:26.839 "job": "raid_bdev1",
00:17:26.839 "core_mask": "0x1",
00:17:26.839 "workload": "randrw",
00:17:26.839 "percentage": 50,
00:17:26.839 "status": "finished",
00:17:26.839 "queue_depth": 1,
00:17:26.839 "io_size": 131072,
00:17:26.839 "runtime": 1.408644,
00:17:26.839 "iops": 8978.847742935759,
00:17:26.839 "mibps": 1122.3559678669699,
00:17:26.839 "io_failed": 1,
00:17:26.839 "io_timeout": 0,
00:17:26.839 "avg_latency_us": 155.4387276033319,
00:17:26.839 "min_latency_us": 40.49454545454545,
00:17:26.839 "max_latency_us": 1839.4763636363637
00:17:26.839 }
00:17:26.839 ],
00:17:26.839 "core_count": 1
00:17:26.839 }
00:17:26.839 [2024-12-06 13:12:13.684847] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:26.839 [2024-12-06 13:12:13.684934] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:26.839 [2024-12-06 13:12:13.684962] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline
00:17:26.839 13:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.839 13:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73436
00:17:26.839 13:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73436 ']'
00:17:26.839 13:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73436
00:17:26.839 13:12:13 bdev_raid.raid_write_error_test --
common/autotest_common.sh@959 -- # uname 00:17:26.839 13:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:26.839 13:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73436 00:17:26.839 killing process with pid 73436 00:17:26.839 13:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:26.839 13:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:26.839 13:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73436' 00:17:26.839 13:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73436 00:17:26.839 13:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73436 00:17:26.839 [2024-12-06 13:12:13.720581] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:27.098 [2024-12-06 13:12:14.027372] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:28.476 13:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.QxFgBD7ksK 00:17:28.476 13:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:17:28.476 13:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:17:28.476 13:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:17:28.476 13:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:17:28.476 13:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:28.476 13:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:17:28.476 13:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:17:28.476 00:17:28.476 real 0m5.129s 00:17:28.476 user 0m6.294s 
00:17:28.476 sys 0m0.685s 00:17:28.476 13:12:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:28.476 13:12:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.476 ************************************ 00:17:28.476 END TEST raid_write_error_test 00:17:28.476 ************************************ 00:17:28.476 13:12:15 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:17:28.476 13:12:15 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:17:28.476 13:12:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:28.476 13:12:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:28.476 13:12:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:28.476 ************************************ 00:17:28.476 START TEST raid_state_function_test 00:17:28.476 ************************************ 00:17:28.476 13:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:17:28.476 13:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:28.476 13:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:28.476 13:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:17:28.476 13:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:28.476 13:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:28.476 13:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:28.476 13:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:28.476 13:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:28.476 
13:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:28.476 13:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:28.476 13:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:28.476 13:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:28.476 13:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:28.476 13:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:28.476 13:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:28.476 13:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:28.476 13:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:28.476 13:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:28.476 13:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:28.476 13:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:28.476 13:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:28.476 13:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:28.476 13:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:28.476 13:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:28.476 13:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:28.476 13:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:28.476 13:12:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:17:28.476 13:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:17:28.476 Process raid pid: 73580 00:17:28.476 13:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73580 00:17:28.476 13:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73580' 00:17:28.476 13:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73580 00:17:28.476 13:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73580 ']' 00:17:28.476 13:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.476 13:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:28.476 13:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:28.476 13:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.476 13:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:28.476 13:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.476 [2024-12-06 13:12:15.424819] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:17:28.476 [2024-12-06 13:12:15.425016] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:28.735 [2024-12-06 13:12:15.607936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.993 [2024-12-06 13:12:15.761321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.993 [2024-12-06 13:12:15.991029] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:28.993 [2024-12-06 13:12:15.991382] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:29.559 13:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:29.559 13:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:17:29.559 13:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:29.559 13:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.559 13:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.559 [2024-12-06 13:12:16.450648] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:29.559 [2024-12-06 13:12:16.450983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:29.559 [2024-12-06 13:12:16.451200] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:29.559 [2024-12-06 13:12:16.451269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:29.559 [2024-12-06 13:12:16.451399] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:17:29.559 [2024-12-06 13:12:16.451458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:29.559 [2024-12-06 13:12:16.451537] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:29.559 [2024-12-06 13:12:16.451659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:29.559 13:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.559 13:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:29.559 13:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:29.559 13:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:29.559 13:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:29.559 13:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:29.559 13:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:29.559 13:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.559 13:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.559 13:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.559 13:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.559 13:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.559 13:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.559 13:12:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.559 13:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.559 13:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.559 13:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.559 "name": "Existed_Raid", 00:17:29.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.559 "strip_size_kb": 0, 00:17:29.559 "state": "configuring", 00:17:29.559 "raid_level": "raid1", 00:17:29.559 "superblock": false, 00:17:29.559 "num_base_bdevs": 4, 00:17:29.559 "num_base_bdevs_discovered": 0, 00:17:29.559 "num_base_bdevs_operational": 4, 00:17:29.559 "base_bdevs_list": [ 00:17:29.559 { 00:17:29.559 "name": "BaseBdev1", 00:17:29.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.559 "is_configured": false, 00:17:29.559 "data_offset": 0, 00:17:29.559 "data_size": 0 00:17:29.559 }, 00:17:29.559 { 00:17:29.559 "name": "BaseBdev2", 00:17:29.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.560 "is_configured": false, 00:17:29.560 "data_offset": 0, 00:17:29.560 "data_size": 0 00:17:29.560 }, 00:17:29.560 { 00:17:29.560 "name": "BaseBdev3", 00:17:29.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.560 "is_configured": false, 00:17:29.560 "data_offset": 0, 00:17:29.560 "data_size": 0 00:17:29.560 }, 00:17:29.560 { 00:17:29.560 "name": "BaseBdev4", 00:17:29.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.560 "is_configured": false, 00:17:29.560 "data_offset": 0, 00:17:29.560 "data_size": 0 00:17:29.560 } 00:17:29.560 ] 00:17:29.560 }' 00:17:29.560 13:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.560 13:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.128 13:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:17:30.128 13:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.128 13:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.128 [2024-12-06 13:12:16.930812] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:30.128 [2024-12-06 13:12:16.930869] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:30.128 13:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.128 13:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:30.128 13:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.128 13:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.128 [2024-12-06 13:12:16.938771] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:30.128 [2024-12-06 13:12:16.938833] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:30.128 [2024-12-06 13:12:16.938849] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:30.128 [2024-12-06 13:12:16.938866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:30.128 [2024-12-06 13:12:16.938876] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:30.128 [2024-12-06 13:12:16.938891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:30.128 [2024-12-06 13:12:16.938901] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:30.128 [2024-12-06 13:12:16.938917] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:30.128 13:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.128 13:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:30.128 13:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.128 13:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.128 [2024-12-06 13:12:16.988466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:30.128 BaseBdev1 00:17:30.128 13:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.128 13:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:30.128 13:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:30.128 13:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:30.128 13:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:30.128 13:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:30.128 13:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:30.128 13:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:30.128 13:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.128 13:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.128 13:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.129 13:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:30.129 13:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.129 13:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.129 [ 00:17:30.129 { 00:17:30.129 "name": "BaseBdev1", 00:17:30.129 "aliases": [ 00:17:30.129 "a1899ba2-8149-4504-ab54-dd363fdffe20" 00:17:30.129 ], 00:17:30.129 "product_name": "Malloc disk", 00:17:30.129 "block_size": 512, 00:17:30.129 "num_blocks": 65536, 00:17:30.129 "uuid": "a1899ba2-8149-4504-ab54-dd363fdffe20", 00:17:30.129 "assigned_rate_limits": { 00:17:30.129 "rw_ios_per_sec": 0, 00:17:30.129 "rw_mbytes_per_sec": 0, 00:17:30.129 "r_mbytes_per_sec": 0, 00:17:30.129 "w_mbytes_per_sec": 0 00:17:30.129 }, 00:17:30.129 "claimed": true, 00:17:30.129 "claim_type": "exclusive_write", 00:17:30.129 "zoned": false, 00:17:30.129 "supported_io_types": { 00:17:30.129 "read": true, 00:17:30.129 "write": true, 00:17:30.129 "unmap": true, 00:17:30.129 "flush": true, 00:17:30.129 "reset": true, 00:17:30.129 "nvme_admin": false, 00:17:30.129 "nvme_io": false, 00:17:30.129 "nvme_io_md": false, 00:17:30.129 "write_zeroes": true, 00:17:30.129 "zcopy": true, 00:17:30.129 "get_zone_info": false, 00:17:30.129 "zone_management": false, 00:17:30.129 "zone_append": false, 00:17:30.129 "compare": false, 00:17:30.129 "compare_and_write": false, 00:17:30.129 "abort": true, 00:17:30.129 "seek_hole": false, 00:17:30.129 "seek_data": false, 00:17:30.129 "copy": true, 00:17:30.129 "nvme_iov_md": false 00:17:30.129 }, 00:17:30.129 "memory_domains": [ 00:17:30.129 { 00:17:30.129 "dma_device_id": "system", 00:17:30.129 "dma_device_type": 1 00:17:30.129 }, 00:17:30.129 { 00:17:30.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.129 "dma_device_type": 2 00:17:30.129 } 00:17:30.129 ], 00:17:30.129 "driver_specific": {} 00:17:30.129 } 00:17:30.129 ] 00:17:30.129 13:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:30.129 13:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:30.129 13:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:30.129 13:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:30.129 13:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:30.129 13:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:30.129 13:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:30.129 13:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:30.129 13:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.129 13:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.129 13:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.129 13:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.129 13:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.129 13:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.129 13:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.129 13:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.129 13:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.129 13:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.129 "name": "Existed_Raid", 
00:17:30.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.129 "strip_size_kb": 0, 00:17:30.129 "state": "configuring", 00:17:30.129 "raid_level": "raid1", 00:17:30.129 "superblock": false, 00:17:30.129 "num_base_bdevs": 4, 00:17:30.129 "num_base_bdevs_discovered": 1, 00:17:30.129 "num_base_bdevs_operational": 4, 00:17:30.129 "base_bdevs_list": [ 00:17:30.129 { 00:17:30.129 "name": "BaseBdev1", 00:17:30.129 "uuid": "a1899ba2-8149-4504-ab54-dd363fdffe20", 00:17:30.129 "is_configured": true, 00:17:30.129 "data_offset": 0, 00:17:30.129 "data_size": 65536 00:17:30.129 }, 00:17:30.129 { 00:17:30.129 "name": "BaseBdev2", 00:17:30.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.129 "is_configured": false, 00:17:30.129 "data_offset": 0, 00:17:30.129 "data_size": 0 00:17:30.129 }, 00:17:30.129 { 00:17:30.129 "name": "BaseBdev3", 00:17:30.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.129 "is_configured": false, 00:17:30.129 "data_offset": 0, 00:17:30.129 "data_size": 0 00:17:30.129 }, 00:17:30.129 { 00:17:30.129 "name": "BaseBdev4", 00:17:30.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.129 "is_configured": false, 00:17:30.129 "data_offset": 0, 00:17:30.129 "data_size": 0 00:17:30.129 } 00:17:30.129 ] 00:17:30.129 }' 00:17:30.129 13:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.129 13:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.698 13:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:30.698 13:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.698 13:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.698 [2024-12-06 13:12:17.532792] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:30.698 [2024-12-06 13:12:17.532868] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:30.698 13:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.698 13:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:30.698 13:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.698 13:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.698 [2024-12-06 13:12:17.540816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:30.698 [2024-12-06 13:12:17.543774] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:30.698 [2024-12-06 13:12:17.543957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:30.698 [2024-12-06 13:12:17.544088] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:30.698 [2024-12-06 13:12:17.544151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:30.698 [2024-12-06 13:12:17.544262] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:30.698 [2024-12-06 13:12:17.544321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:30.698 13:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.698 13:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:30.698 13:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:30.698 13:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:30.698 
13:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:30.698 13:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:30.698 13:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:30.698 13:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:30.698 13:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:30.698 13:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.698 13:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.698 13:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.698 13:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.698 13:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.698 13:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.698 13:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.698 13:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.698 13:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.698 13:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.698 "name": "Existed_Raid", 00:17:30.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.698 "strip_size_kb": 0, 00:17:30.698 "state": "configuring", 00:17:30.698 "raid_level": "raid1", 00:17:30.698 "superblock": false, 00:17:30.698 "num_base_bdevs": 4, 00:17:30.698 "num_base_bdevs_discovered": 1, 
00:17:30.698 "num_base_bdevs_operational": 4, 00:17:30.698 "base_bdevs_list": [ 00:17:30.698 { 00:17:30.698 "name": "BaseBdev1", 00:17:30.698 "uuid": "a1899ba2-8149-4504-ab54-dd363fdffe20", 00:17:30.698 "is_configured": true, 00:17:30.698 "data_offset": 0, 00:17:30.698 "data_size": 65536 00:17:30.698 }, 00:17:30.698 { 00:17:30.698 "name": "BaseBdev2", 00:17:30.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.698 "is_configured": false, 00:17:30.698 "data_offset": 0, 00:17:30.698 "data_size": 0 00:17:30.698 }, 00:17:30.698 { 00:17:30.698 "name": "BaseBdev3", 00:17:30.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.698 "is_configured": false, 00:17:30.698 "data_offset": 0, 00:17:30.698 "data_size": 0 00:17:30.698 }, 00:17:30.698 { 00:17:30.698 "name": "BaseBdev4", 00:17:30.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.698 "is_configured": false, 00:17:30.698 "data_offset": 0, 00:17:30.698 "data_size": 0 00:17:30.698 } 00:17:30.698 ] 00:17:30.698 }' 00:17:30.698 13:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.698 13:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.266 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:31.266 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.266 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.266 [2024-12-06 13:12:18.078293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:31.266 BaseBdev2 00:17:31.266 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.266 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:31.266 13:12:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:31.266 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:31.266 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:31.266 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:31.266 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:31.266 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:31.266 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.266 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.266 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.266 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:31.266 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.266 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.266 [ 00:17:31.266 { 00:17:31.266 "name": "BaseBdev2", 00:17:31.266 "aliases": [ 00:17:31.266 "7b411118-a28a-41cb-a2d0-ae92166e0dab" 00:17:31.266 ], 00:17:31.266 "product_name": "Malloc disk", 00:17:31.266 "block_size": 512, 00:17:31.266 "num_blocks": 65536, 00:17:31.266 "uuid": "7b411118-a28a-41cb-a2d0-ae92166e0dab", 00:17:31.266 "assigned_rate_limits": { 00:17:31.266 "rw_ios_per_sec": 0, 00:17:31.266 "rw_mbytes_per_sec": 0, 00:17:31.266 "r_mbytes_per_sec": 0, 00:17:31.266 "w_mbytes_per_sec": 0 00:17:31.266 }, 00:17:31.266 "claimed": true, 00:17:31.266 "claim_type": "exclusive_write", 00:17:31.266 "zoned": false, 00:17:31.266 "supported_io_types": { 00:17:31.266 "read": true, 
00:17:31.266 "write": true, 00:17:31.266 "unmap": true, 00:17:31.266 "flush": true, 00:17:31.266 "reset": true, 00:17:31.266 "nvme_admin": false, 00:17:31.266 "nvme_io": false, 00:17:31.266 "nvme_io_md": false, 00:17:31.266 "write_zeroes": true, 00:17:31.266 "zcopy": true, 00:17:31.266 "get_zone_info": false, 00:17:31.266 "zone_management": false, 00:17:31.266 "zone_append": false, 00:17:31.266 "compare": false, 00:17:31.266 "compare_and_write": false, 00:17:31.266 "abort": true, 00:17:31.266 "seek_hole": false, 00:17:31.266 "seek_data": false, 00:17:31.266 "copy": true, 00:17:31.266 "nvme_iov_md": false 00:17:31.266 }, 00:17:31.266 "memory_domains": [ 00:17:31.266 { 00:17:31.266 "dma_device_id": "system", 00:17:31.266 "dma_device_type": 1 00:17:31.266 }, 00:17:31.266 { 00:17:31.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.266 "dma_device_type": 2 00:17:31.266 } 00:17:31.266 ], 00:17:31.266 "driver_specific": {} 00:17:31.266 } 00:17:31.266 ] 00:17:31.266 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.266 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:31.266 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:31.266 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:31.266 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:31.267 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:31.267 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:31.267 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:31.267 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:17:31.267 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:31.267 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.267 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.267 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.267 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.267 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.267 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.267 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.267 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.267 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.267 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.267 "name": "Existed_Raid", 00:17:31.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.267 "strip_size_kb": 0, 00:17:31.267 "state": "configuring", 00:17:31.267 "raid_level": "raid1", 00:17:31.267 "superblock": false, 00:17:31.267 "num_base_bdevs": 4, 00:17:31.267 "num_base_bdevs_discovered": 2, 00:17:31.267 "num_base_bdevs_operational": 4, 00:17:31.267 "base_bdevs_list": [ 00:17:31.267 { 00:17:31.267 "name": "BaseBdev1", 00:17:31.267 "uuid": "a1899ba2-8149-4504-ab54-dd363fdffe20", 00:17:31.267 "is_configured": true, 00:17:31.267 "data_offset": 0, 00:17:31.267 "data_size": 65536 00:17:31.267 }, 00:17:31.267 { 00:17:31.267 "name": "BaseBdev2", 00:17:31.267 "uuid": "7b411118-a28a-41cb-a2d0-ae92166e0dab", 00:17:31.267 "is_configured": true, 
00:17:31.267 "data_offset": 0, 00:17:31.267 "data_size": 65536 00:17:31.267 }, 00:17:31.267 { 00:17:31.267 "name": "BaseBdev3", 00:17:31.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.267 "is_configured": false, 00:17:31.267 "data_offset": 0, 00:17:31.267 "data_size": 0 00:17:31.267 }, 00:17:31.267 { 00:17:31.267 "name": "BaseBdev4", 00:17:31.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.267 "is_configured": false, 00:17:31.267 "data_offset": 0, 00:17:31.267 "data_size": 0 00:17:31.267 } 00:17:31.267 ] 00:17:31.267 }' 00:17:31.267 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.267 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.835 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:31.835 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.835 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.835 [2024-12-06 13:12:18.711683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:31.835 BaseBdev3 00:17:31.835 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.835 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:31.835 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:31.835 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:31.835 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:31.835 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:31.835 13:12:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:31.835 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:31.835 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.835 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.835 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.835 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:31.835 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.835 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.835 [ 00:17:31.835 { 00:17:31.836 "name": "BaseBdev3", 00:17:31.836 "aliases": [ 00:17:31.836 "56fd0a61-e4f6-4d17-8421-ca6a2e0ea864" 00:17:31.836 ], 00:17:31.836 "product_name": "Malloc disk", 00:17:31.836 "block_size": 512, 00:17:31.836 "num_blocks": 65536, 00:17:31.836 "uuid": "56fd0a61-e4f6-4d17-8421-ca6a2e0ea864", 00:17:31.836 "assigned_rate_limits": { 00:17:31.836 "rw_ios_per_sec": 0, 00:17:31.836 "rw_mbytes_per_sec": 0, 00:17:31.836 "r_mbytes_per_sec": 0, 00:17:31.836 "w_mbytes_per_sec": 0 00:17:31.836 }, 00:17:31.836 "claimed": true, 00:17:31.836 "claim_type": "exclusive_write", 00:17:31.836 "zoned": false, 00:17:31.836 "supported_io_types": { 00:17:31.836 "read": true, 00:17:31.836 "write": true, 00:17:31.836 "unmap": true, 00:17:31.836 "flush": true, 00:17:31.836 "reset": true, 00:17:31.836 "nvme_admin": false, 00:17:31.836 "nvme_io": false, 00:17:31.836 "nvme_io_md": false, 00:17:31.836 "write_zeroes": true, 00:17:31.836 "zcopy": true, 00:17:31.836 "get_zone_info": false, 00:17:31.836 "zone_management": false, 00:17:31.836 "zone_append": false, 00:17:31.836 "compare": false, 00:17:31.836 "compare_and_write": false, 
00:17:31.836 "abort": true, 00:17:31.836 "seek_hole": false, 00:17:31.836 "seek_data": false, 00:17:31.836 "copy": true, 00:17:31.836 "nvme_iov_md": false 00:17:31.836 }, 00:17:31.836 "memory_domains": [ 00:17:31.836 { 00:17:31.836 "dma_device_id": "system", 00:17:31.836 "dma_device_type": 1 00:17:31.836 }, 00:17:31.836 { 00:17:31.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.836 "dma_device_type": 2 00:17:31.836 } 00:17:31.836 ], 00:17:31.836 "driver_specific": {} 00:17:31.836 } 00:17:31.836 ] 00:17:31.836 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.836 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:31.836 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:31.836 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:31.836 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:31.836 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:31.836 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:31.836 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:31.836 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:31.836 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:31.836 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.836 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.836 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:31.836 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.836 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.836 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.836 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.836 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.836 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.836 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.836 "name": "Existed_Raid", 00:17:31.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.836 "strip_size_kb": 0, 00:17:31.836 "state": "configuring", 00:17:31.836 "raid_level": "raid1", 00:17:31.836 "superblock": false, 00:17:31.836 "num_base_bdevs": 4, 00:17:31.836 "num_base_bdevs_discovered": 3, 00:17:31.836 "num_base_bdevs_operational": 4, 00:17:31.836 "base_bdevs_list": [ 00:17:31.836 { 00:17:31.836 "name": "BaseBdev1", 00:17:31.836 "uuid": "a1899ba2-8149-4504-ab54-dd363fdffe20", 00:17:31.836 "is_configured": true, 00:17:31.836 "data_offset": 0, 00:17:31.836 "data_size": 65536 00:17:31.836 }, 00:17:31.836 { 00:17:31.836 "name": "BaseBdev2", 00:17:31.836 "uuid": "7b411118-a28a-41cb-a2d0-ae92166e0dab", 00:17:31.836 "is_configured": true, 00:17:31.836 "data_offset": 0, 00:17:31.836 "data_size": 65536 00:17:31.836 }, 00:17:31.836 { 00:17:31.836 "name": "BaseBdev3", 00:17:31.836 "uuid": "56fd0a61-e4f6-4d17-8421-ca6a2e0ea864", 00:17:31.836 "is_configured": true, 00:17:31.836 "data_offset": 0, 00:17:31.836 "data_size": 65536 00:17:31.836 }, 00:17:31.836 { 00:17:31.836 "name": "BaseBdev4", 00:17:31.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.836 "is_configured": false, 
00:17:31.836 "data_offset": 0, 00:17:31.836 "data_size": 0 00:17:31.836 } 00:17:31.836 ] 00:17:31.836 }' 00:17:31.836 13:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.836 13:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.404 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:32.405 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.405 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.405 [2024-12-06 13:12:19.316026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:32.405 [2024-12-06 13:12:19.316386] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:32.405 [2024-12-06 13:12:19.316412] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:32.405 [2024-12-06 13:12:19.316838] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:32.405 [2024-12-06 13:12:19.317142] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:32.405 [2024-12-06 13:12:19.317167] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:32.405 [2024-12-06 13:12:19.317537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:32.405 BaseBdev4 00:17:32.405 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.405 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:32.405 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:32.405 13:12:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:32.405 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:32.405 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:32.405 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:32.405 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:32.405 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.405 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.405 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.405 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:32.405 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.405 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.405 [ 00:17:32.405 { 00:17:32.405 "name": "BaseBdev4", 00:17:32.405 "aliases": [ 00:17:32.405 "33465ce0-1e25-4bb0-95ed-296690c957be" 00:17:32.405 ], 00:17:32.405 "product_name": "Malloc disk", 00:17:32.405 "block_size": 512, 00:17:32.405 "num_blocks": 65536, 00:17:32.405 "uuid": "33465ce0-1e25-4bb0-95ed-296690c957be", 00:17:32.405 "assigned_rate_limits": { 00:17:32.405 "rw_ios_per_sec": 0, 00:17:32.405 "rw_mbytes_per_sec": 0, 00:17:32.405 "r_mbytes_per_sec": 0, 00:17:32.405 "w_mbytes_per_sec": 0 00:17:32.405 }, 00:17:32.405 "claimed": true, 00:17:32.405 "claim_type": "exclusive_write", 00:17:32.405 "zoned": false, 00:17:32.405 "supported_io_types": { 00:17:32.405 "read": true, 00:17:32.405 "write": true, 00:17:32.405 "unmap": true, 00:17:32.405 "flush": true, 00:17:32.405 "reset": true, 00:17:32.405 
"nvme_admin": false, 00:17:32.405 "nvme_io": false, 00:17:32.405 "nvme_io_md": false, 00:17:32.405 "write_zeroes": true, 00:17:32.405 "zcopy": true, 00:17:32.405 "get_zone_info": false, 00:17:32.405 "zone_management": false, 00:17:32.405 "zone_append": false, 00:17:32.405 "compare": false, 00:17:32.405 "compare_and_write": false, 00:17:32.405 "abort": true, 00:17:32.405 "seek_hole": false, 00:17:32.405 "seek_data": false, 00:17:32.405 "copy": true, 00:17:32.405 "nvme_iov_md": false 00:17:32.405 }, 00:17:32.405 "memory_domains": [ 00:17:32.405 { 00:17:32.405 "dma_device_id": "system", 00:17:32.405 "dma_device_type": 1 00:17:32.405 }, 00:17:32.405 { 00:17:32.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.405 "dma_device_type": 2 00:17:32.405 } 00:17:32.405 ], 00:17:32.405 "driver_specific": {} 00:17:32.405 } 00:17:32.405 ] 00:17:32.405 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.405 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:32.405 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:32.405 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:32.405 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:17:32.405 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:32.405 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.405 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:32.405 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:32.405 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:32.405 13:12:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.405 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.405 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.405 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.405 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.405 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.405 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.405 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.405 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.664 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.664 "name": "Existed_Raid", 00:17:32.664 "uuid": "11f87269-3ced-4999-8320-fd0f481c899e", 00:17:32.664 "strip_size_kb": 0, 00:17:32.664 "state": "online", 00:17:32.664 "raid_level": "raid1", 00:17:32.664 "superblock": false, 00:17:32.664 "num_base_bdevs": 4, 00:17:32.664 "num_base_bdevs_discovered": 4, 00:17:32.664 "num_base_bdevs_operational": 4, 00:17:32.664 "base_bdevs_list": [ 00:17:32.664 { 00:17:32.664 "name": "BaseBdev1", 00:17:32.664 "uuid": "a1899ba2-8149-4504-ab54-dd363fdffe20", 00:17:32.664 "is_configured": true, 00:17:32.664 "data_offset": 0, 00:17:32.664 "data_size": 65536 00:17:32.664 }, 00:17:32.664 { 00:17:32.664 "name": "BaseBdev2", 00:17:32.664 "uuid": "7b411118-a28a-41cb-a2d0-ae92166e0dab", 00:17:32.664 "is_configured": true, 00:17:32.664 "data_offset": 0, 00:17:32.664 "data_size": 65536 00:17:32.664 }, 00:17:32.664 { 00:17:32.664 "name": "BaseBdev3", 00:17:32.664 "uuid": 
"56fd0a61-e4f6-4d17-8421-ca6a2e0ea864", 00:17:32.664 "is_configured": true, 00:17:32.664 "data_offset": 0, 00:17:32.664 "data_size": 65536 00:17:32.664 }, 00:17:32.664 { 00:17:32.664 "name": "BaseBdev4", 00:17:32.664 "uuid": "33465ce0-1e25-4bb0-95ed-296690c957be", 00:17:32.664 "is_configured": true, 00:17:32.664 "data_offset": 0, 00:17:32.664 "data_size": 65536 00:17:32.664 } 00:17:32.664 ] 00:17:32.664 }' 00:17:32.664 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.664 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.923 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:32.923 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:32.923 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:32.923 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:32.923 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:32.923 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:32.923 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:32.923 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.923 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.923 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:32.923 [2024-12-06 13:12:19.888765] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:32.923 13:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.923 13:12:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:32.923 "name": "Existed_Raid", 00:17:32.923 "aliases": [ 00:17:32.923 "11f87269-3ced-4999-8320-fd0f481c899e" 00:17:32.923 ], 00:17:32.923 "product_name": "Raid Volume", 00:17:32.923 "block_size": 512, 00:17:32.923 "num_blocks": 65536, 00:17:32.923 "uuid": "11f87269-3ced-4999-8320-fd0f481c899e", 00:17:32.923 "assigned_rate_limits": { 00:17:32.923 "rw_ios_per_sec": 0, 00:17:32.923 "rw_mbytes_per_sec": 0, 00:17:32.923 "r_mbytes_per_sec": 0, 00:17:32.923 "w_mbytes_per_sec": 0 00:17:32.923 }, 00:17:32.923 "claimed": false, 00:17:32.923 "zoned": false, 00:17:32.923 "supported_io_types": { 00:17:32.923 "read": true, 00:17:32.923 "write": true, 00:17:32.923 "unmap": false, 00:17:32.923 "flush": false, 00:17:32.923 "reset": true, 00:17:32.923 "nvme_admin": false, 00:17:32.923 "nvme_io": false, 00:17:32.923 "nvme_io_md": false, 00:17:32.923 "write_zeroes": true, 00:17:32.923 "zcopy": false, 00:17:32.923 "get_zone_info": false, 00:17:32.923 "zone_management": false, 00:17:32.923 "zone_append": false, 00:17:32.923 "compare": false, 00:17:32.923 "compare_and_write": false, 00:17:32.923 "abort": false, 00:17:32.923 "seek_hole": false, 00:17:32.923 "seek_data": false, 00:17:32.923 "copy": false, 00:17:32.923 "nvme_iov_md": false 00:17:32.923 }, 00:17:32.923 "memory_domains": [ 00:17:32.923 { 00:17:32.923 "dma_device_id": "system", 00:17:32.923 "dma_device_type": 1 00:17:32.923 }, 00:17:32.923 { 00:17:32.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.923 "dma_device_type": 2 00:17:32.923 }, 00:17:32.923 { 00:17:32.923 "dma_device_id": "system", 00:17:32.923 "dma_device_type": 1 00:17:32.923 }, 00:17:32.923 { 00:17:32.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.923 "dma_device_type": 2 00:17:32.923 }, 00:17:32.923 { 00:17:32.923 "dma_device_id": "system", 00:17:32.923 "dma_device_type": 1 00:17:32.923 }, 00:17:32.923 { 00:17:32.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:17:32.923 "dma_device_type": 2 00:17:32.923 }, 00:17:32.923 { 00:17:32.923 "dma_device_id": "system", 00:17:32.923 "dma_device_type": 1 00:17:32.923 }, 00:17:32.923 { 00:17:32.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.923 "dma_device_type": 2 00:17:32.923 } 00:17:32.923 ], 00:17:32.923 "driver_specific": { 00:17:32.923 "raid": { 00:17:32.923 "uuid": "11f87269-3ced-4999-8320-fd0f481c899e", 00:17:32.923 "strip_size_kb": 0, 00:17:32.923 "state": "online", 00:17:32.923 "raid_level": "raid1", 00:17:32.923 "superblock": false, 00:17:32.923 "num_base_bdevs": 4, 00:17:32.923 "num_base_bdevs_discovered": 4, 00:17:32.923 "num_base_bdevs_operational": 4, 00:17:32.923 "base_bdevs_list": [ 00:17:32.923 { 00:17:32.923 "name": "BaseBdev1", 00:17:32.923 "uuid": "a1899ba2-8149-4504-ab54-dd363fdffe20", 00:17:32.923 "is_configured": true, 00:17:32.923 "data_offset": 0, 00:17:32.923 "data_size": 65536 00:17:32.923 }, 00:17:32.923 { 00:17:32.923 "name": "BaseBdev2", 00:17:32.923 "uuid": "7b411118-a28a-41cb-a2d0-ae92166e0dab", 00:17:32.923 "is_configured": true, 00:17:32.923 "data_offset": 0, 00:17:32.923 "data_size": 65536 00:17:32.923 }, 00:17:32.923 { 00:17:32.923 "name": "BaseBdev3", 00:17:32.923 "uuid": "56fd0a61-e4f6-4d17-8421-ca6a2e0ea864", 00:17:32.924 "is_configured": true, 00:17:32.924 "data_offset": 0, 00:17:32.924 "data_size": 65536 00:17:32.924 }, 00:17:32.924 { 00:17:32.924 "name": "BaseBdev4", 00:17:32.924 "uuid": "33465ce0-1e25-4bb0-95ed-296690c957be", 00:17:32.924 "is_configured": true, 00:17:32.924 "data_offset": 0, 00:17:32.924 "data_size": 65536 00:17:32.924 } 00:17:32.924 ] 00:17:32.924 } 00:17:32.924 } 00:17:32.924 }' 00:17:32.924 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:33.181 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:33.181 BaseBdev2 00:17:33.181 BaseBdev3 
00:17:33.181 BaseBdev4' 00:17:33.181 13:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.181 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:33.181 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.181 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:33.181 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.181 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.181 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.181 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.181 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:33.181 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:33.181 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.181 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.181 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:33.181 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.181 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.181 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.181 13:12:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:33.181 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:33.181 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.181 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.181 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:33.181 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.181 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.181 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.438 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:33.438 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:33.438 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.438 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.438 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:33.438 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.439 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.439 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.439 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:33.439 13:12:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:33.439 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:33.439 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.439 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.439 [2024-12-06 13:12:20.268518] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:33.439 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.439 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:33.439 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:33.439 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:33.439 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:33.439 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:33.439 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:33.439 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:33.439 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.439 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:33.439 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:33.439 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:33.439 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.439 
13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.439 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.439 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.439 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.439 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.439 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.439 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:33.439 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.439 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.439 "name": "Existed_Raid", 00:17:33.439 "uuid": "11f87269-3ced-4999-8320-fd0f481c899e", 00:17:33.439 "strip_size_kb": 0, 00:17:33.439 "state": "online", 00:17:33.439 "raid_level": "raid1", 00:17:33.439 "superblock": false, 00:17:33.439 "num_base_bdevs": 4, 00:17:33.439 "num_base_bdevs_discovered": 3, 00:17:33.439 "num_base_bdevs_operational": 3, 00:17:33.439 "base_bdevs_list": [ 00:17:33.439 { 00:17:33.439 "name": null, 00:17:33.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.439 "is_configured": false, 00:17:33.439 "data_offset": 0, 00:17:33.439 "data_size": 65536 00:17:33.439 }, 00:17:33.439 { 00:17:33.439 "name": "BaseBdev2", 00:17:33.439 "uuid": "7b411118-a28a-41cb-a2d0-ae92166e0dab", 00:17:33.439 "is_configured": true, 00:17:33.439 "data_offset": 0, 00:17:33.439 "data_size": 65536 00:17:33.439 }, 00:17:33.439 { 00:17:33.439 "name": "BaseBdev3", 00:17:33.439 "uuid": "56fd0a61-e4f6-4d17-8421-ca6a2e0ea864", 00:17:33.439 "is_configured": true, 00:17:33.439 "data_offset": 0, 
00:17:33.439 "data_size": 65536 00:17:33.439 }, 00:17:33.439 { 00:17:33.439 "name": "BaseBdev4", 00:17:33.439 "uuid": "33465ce0-1e25-4bb0-95ed-296690c957be", 00:17:33.439 "is_configured": true, 00:17:33.439 "data_offset": 0, 00:17:33.439 "data_size": 65536 00:17:33.439 } 00:17:33.439 ] 00:17:33.439 }' 00:17:33.439 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.439 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.004 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:34.004 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:34.004 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:34.004 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.004 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.004 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.004 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.004 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:34.004 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:34.004 13:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:34.004 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.004 13:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.004 [2024-12-06 13:12:20.925622] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:34.263 13:12:21 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.263 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:34.263 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:34.263 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.263 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:34.263 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.263 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.263 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.263 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:34.263 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:34.263 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:34.263 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.263 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.263 [2024-12-06 13:12:21.092805] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:34.263 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.263 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:34.263 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:34.263 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.263 13:12:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:34.263 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.263 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.263 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.263 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:34.263 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:34.263 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:34.263 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.263 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.263 [2024-12-06 13:12:21.244440] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:34.263 [2024-12-06 13:12:21.244593] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:34.522 [2024-12-06 13:12:21.339532] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:34.522 [2024-12-06 13:12:21.339810] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:34.522 [2024-12-06 13:12:21.339848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:34.522 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.522 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:34.522 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:34.522 13:12:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.522 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.522 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:34.522 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.522 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.522 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:34.522 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:34.522 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:34.522 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:34.522 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:34.522 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:34.522 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.522 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.522 BaseBdev2 00:17:34.522 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.522 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:34.522 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:34.522 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:34.522 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:34.522 13:12:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:34.522 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:34.522 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:34.522 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.522 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.522 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.522 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:34.522 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.522 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.522 [ 00:17:34.522 { 00:17:34.522 "name": "BaseBdev2", 00:17:34.522 "aliases": [ 00:17:34.522 "eb286919-f6c1-497c-a8d1-e3cdd6546d0a" 00:17:34.522 ], 00:17:34.522 "product_name": "Malloc disk", 00:17:34.522 "block_size": 512, 00:17:34.522 "num_blocks": 65536, 00:17:34.522 "uuid": "eb286919-f6c1-497c-a8d1-e3cdd6546d0a", 00:17:34.522 "assigned_rate_limits": { 00:17:34.522 "rw_ios_per_sec": 0, 00:17:34.522 "rw_mbytes_per_sec": 0, 00:17:34.522 "r_mbytes_per_sec": 0, 00:17:34.522 "w_mbytes_per_sec": 0 00:17:34.522 }, 00:17:34.522 "claimed": false, 00:17:34.522 "zoned": false, 00:17:34.522 "supported_io_types": { 00:17:34.522 "read": true, 00:17:34.522 "write": true, 00:17:34.522 "unmap": true, 00:17:34.522 "flush": true, 00:17:34.522 "reset": true, 00:17:34.522 "nvme_admin": false, 00:17:34.522 "nvme_io": false, 00:17:34.522 "nvme_io_md": false, 00:17:34.522 "write_zeroes": true, 00:17:34.522 "zcopy": true, 00:17:34.522 "get_zone_info": false, 00:17:34.522 "zone_management": false, 00:17:34.522 "zone_append": false, 
00:17:34.522 "compare": false, 00:17:34.522 "compare_and_write": false, 00:17:34.522 "abort": true, 00:17:34.522 "seek_hole": false, 00:17:34.522 "seek_data": false, 00:17:34.522 "copy": true, 00:17:34.522 "nvme_iov_md": false 00:17:34.522 }, 00:17:34.522 "memory_domains": [ 00:17:34.522 { 00:17:34.522 "dma_device_id": "system", 00:17:34.522 "dma_device_type": 1 00:17:34.522 }, 00:17:34.522 { 00:17:34.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.522 "dma_device_type": 2 00:17:34.522 } 00:17:34.522 ], 00:17:34.522 "driver_specific": {} 00:17:34.522 } 00:17:34.522 ] 00:17:34.522 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.522 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:34.522 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:34.522 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:34.522 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:34.522 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.522 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.781 BaseBdev3 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.781 [ 00:17:34.781 { 00:17:34.781 "name": "BaseBdev3", 00:17:34.781 "aliases": [ 00:17:34.781 "21daa463-4a46-47d2-8490-ef517a2fc9db" 00:17:34.781 ], 00:17:34.781 "product_name": "Malloc disk", 00:17:34.781 "block_size": 512, 00:17:34.781 "num_blocks": 65536, 00:17:34.781 "uuid": "21daa463-4a46-47d2-8490-ef517a2fc9db", 00:17:34.781 "assigned_rate_limits": { 00:17:34.781 "rw_ios_per_sec": 0, 00:17:34.781 "rw_mbytes_per_sec": 0, 00:17:34.781 "r_mbytes_per_sec": 0, 00:17:34.781 "w_mbytes_per_sec": 0 00:17:34.781 }, 00:17:34.781 "claimed": false, 00:17:34.781 "zoned": false, 00:17:34.781 "supported_io_types": { 00:17:34.781 "read": true, 00:17:34.781 "write": true, 00:17:34.781 "unmap": true, 00:17:34.781 "flush": true, 00:17:34.781 "reset": true, 00:17:34.781 "nvme_admin": false, 00:17:34.781 "nvme_io": false, 00:17:34.781 "nvme_io_md": false, 00:17:34.781 "write_zeroes": true, 00:17:34.781 "zcopy": true, 00:17:34.781 "get_zone_info": false, 00:17:34.781 "zone_management": false, 00:17:34.781 "zone_append": false, 
00:17:34.781 "compare": false, 00:17:34.781 "compare_and_write": false, 00:17:34.781 "abort": true, 00:17:34.781 "seek_hole": false, 00:17:34.781 "seek_data": false, 00:17:34.781 "copy": true, 00:17:34.781 "nvme_iov_md": false 00:17:34.781 }, 00:17:34.781 "memory_domains": [ 00:17:34.781 { 00:17:34.781 "dma_device_id": "system", 00:17:34.781 "dma_device_type": 1 00:17:34.781 }, 00:17:34.781 { 00:17:34.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.781 "dma_device_type": 2 00:17:34.781 } 00:17:34.781 ], 00:17:34.781 "driver_specific": {} 00:17:34.781 } 00:17:34.781 ] 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.781 BaseBdev4 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.781 [ 00:17:34.781 { 00:17:34.781 "name": "BaseBdev4", 00:17:34.781 "aliases": [ 00:17:34.781 "40872a0b-d047-4fce-b0fa-0dde40da6055" 00:17:34.781 ], 00:17:34.781 "product_name": "Malloc disk", 00:17:34.781 "block_size": 512, 00:17:34.781 "num_blocks": 65536, 00:17:34.781 "uuid": "40872a0b-d047-4fce-b0fa-0dde40da6055", 00:17:34.781 "assigned_rate_limits": { 00:17:34.781 "rw_ios_per_sec": 0, 00:17:34.781 "rw_mbytes_per_sec": 0, 00:17:34.781 "r_mbytes_per_sec": 0, 00:17:34.781 "w_mbytes_per_sec": 0 00:17:34.781 }, 00:17:34.781 "claimed": false, 00:17:34.781 "zoned": false, 00:17:34.781 "supported_io_types": { 00:17:34.781 "read": true, 00:17:34.781 "write": true, 00:17:34.781 "unmap": true, 00:17:34.781 "flush": true, 00:17:34.781 "reset": true, 00:17:34.781 "nvme_admin": false, 00:17:34.781 "nvme_io": false, 00:17:34.781 "nvme_io_md": false, 00:17:34.781 "write_zeroes": true, 00:17:34.781 "zcopy": true, 00:17:34.781 "get_zone_info": false, 00:17:34.781 "zone_management": false, 00:17:34.781 "zone_append": false, 
00:17:34.781 "compare": false, 00:17:34.781 "compare_and_write": false, 00:17:34.781 "abort": true, 00:17:34.781 "seek_hole": false, 00:17:34.781 "seek_data": false, 00:17:34.781 "copy": true, 00:17:34.781 "nvme_iov_md": false 00:17:34.781 }, 00:17:34.781 "memory_domains": [ 00:17:34.781 { 00:17:34.781 "dma_device_id": "system", 00:17:34.781 "dma_device_type": 1 00:17:34.781 }, 00:17:34.781 { 00:17:34.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.781 "dma_device_type": 2 00:17:34.781 } 00:17:34.781 ], 00:17:34.781 "driver_specific": {} 00:17:34.781 } 00:17:34.781 ] 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:34.781 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:34.782 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:34.782 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.782 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.782 [2024-12-06 13:12:21.659336] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:34.782 [2024-12-06 13:12:21.659401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:34.782 [2024-12-06 13:12:21.659430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:34.782 [2024-12-06 13:12:21.662133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:34.782 [2024-12-06 13:12:21.662375] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:34.782 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.782 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:34.782 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:34.782 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:34.782 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:34.782 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:34.782 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:34.782 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.782 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.782 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.782 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.782 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.782 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:34.782 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.782 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.782 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.782 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:17:34.782 "name": "Existed_Raid", 00:17:34.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.782 "strip_size_kb": 0, 00:17:34.782 "state": "configuring", 00:17:34.782 "raid_level": "raid1", 00:17:34.782 "superblock": false, 00:17:34.782 "num_base_bdevs": 4, 00:17:34.782 "num_base_bdevs_discovered": 3, 00:17:34.782 "num_base_bdevs_operational": 4, 00:17:34.782 "base_bdevs_list": [ 00:17:34.782 { 00:17:34.782 "name": "BaseBdev1", 00:17:34.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.782 "is_configured": false, 00:17:34.782 "data_offset": 0, 00:17:34.782 "data_size": 0 00:17:34.782 }, 00:17:34.782 { 00:17:34.782 "name": "BaseBdev2", 00:17:34.782 "uuid": "eb286919-f6c1-497c-a8d1-e3cdd6546d0a", 00:17:34.782 "is_configured": true, 00:17:34.782 "data_offset": 0, 00:17:34.782 "data_size": 65536 00:17:34.782 }, 00:17:34.782 { 00:17:34.782 "name": "BaseBdev3", 00:17:34.782 "uuid": "21daa463-4a46-47d2-8490-ef517a2fc9db", 00:17:34.782 "is_configured": true, 00:17:34.782 "data_offset": 0, 00:17:34.782 "data_size": 65536 00:17:34.782 }, 00:17:34.782 { 00:17:34.782 "name": "BaseBdev4", 00:17:34.782 "uuid": "40872a0b-d047-4fce-b0fa-0dde40da6055", 00:17:34.782 "is_configured": true, 00:17:34.782 "data_offset": 0, 00:17:34.782 "data_size": 65536 00:17:34.782 } 00:17:34.782 ] 00:17:34.782 }' 00:17:34.782 13:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.782 13:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.348 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:35.348 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.348 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.348 [2024-12-06 13:12:22.203626] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:17:35.348 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.348 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:35.348 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:35.348 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:35.348 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:35.348 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:35.348 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:35.348 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.348 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.348 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.348 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.348 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.348 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.348 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.348 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.348 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.348 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.348 "name": "Existed_Raid", 00:17:35.348 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:35.348 "strip_size_kb": 0, 00:17:35.348 "state": "configuring", 00:17:35.348 "raid_level": "raid1", 00:17:35.348 "superblock": false, 00:17:35.348 "num_base_bdevs": 4, 00:17:35.348 "num_base_bdevs_discovered": 2, 00:17:35.348 "num_base_bdevs_operational": 4, 00:17:35.348 "base_bdevs_list": [ 00:17:35.348 { 00:17:35.348 "name": "BaseBdev1", 00:17:35.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.348 "is_configured": false, 00:17:35.348 "data_offset": 0, 00:17:35.348 "data_size": 0 00:17:35.348 }, 00:17:35.348 { 00:17:35.348 "name": null, 00:17:35.348 "uuid": "eb286919-f6c1-497c-a8d1-e3cdd6546d0a", 00:17:35.348 "is_configured": false, 00:17:35.348 "data_offset": 0, 00:17:35.348 "data_size": 65536 00:17:35.348 }, 00:17:35.348 { 00:17:35.348 "name": "BaseBdev3", 00:17:35.348 "uuid": "21daa463-4a46-47d2-8490-ef517a2fc9db", 00:17:35.348 "is_configured": true, 00:17:35.348 "data_offset": 0, 00:17:35.348 "data_size": 65536 00:17:35.348 }, 00:17:35.348 { 00:17:35.348 "name": "BaseBdev4", 00:17:35.348 "uuid": "40872a0b-d047-4fce-b0fa-0dde40da6055", 00:17:35.348 "is_configured": true, 00:17:35.348 "data_offset": 0, 00:17:35.348 "data_size": 65536 00:17:35.348 } 00:17:35.348 ] 00:17:35.348 }' 00:17:35.348 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.348 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.962 [2024-12-06 13:12:22.848631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:35.962 BaseBdev1 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.962 [ 00:17:35.962 { 00:17:35.962 "name": "BaseBdev1", 00:17:35.962 "aliases": [ 00:17:35.962 "d38fc1a7-e708-4ab5-978a-f7a6e8b62f16" 00:17:35.962 ], 00:17:35.962 "product_name": "Malloc disk", 00:17:35.962 "block_size": 512, 00:17:35.962 "num_blocks": 65536, 00:17:35.962 "uuid": "d38fc1a7-e708-4ab5-978a-f7a6e8b62f16", 00:17:35.962 "assigned_rate_limits": { 00:17:35.962 "rw_ios_per_sec": 0, 00:17:35.962 "rw_mbytes_per_sec": 0, 00:17:35.962 "r_mbytes_per_sec": 0, 00:17:35.962 "w_mbytes_per_sec": 0 00:17:35.962 }, 00:17:35.962 "claimed": true, 00:17:35.962 "claim_type": "exclusive_write", 00:17:35.962 "zoned": false, 00:17:35.962 "supported_io_types": { 00:17:35.962 "read": true, 00:17:35.962 "write": true, 00:17:35.962 "unmap": true, 00:17:35.962 "flush": true, 00:17:35.962 "reset": true, 00:17:35.962 "nvme_admin": false, 00:17:35.962 "nvme_io": false, 00:17:35.962 "nvme_io_md": false, 00:17:35.962 "write_zeroes": true, 00:17:35.962 "zcopy": true, 00:17:35.962 "get_zone_info": false, 00:17:35.962 "zone_management": false, 00:17:35.962 "zone_append": false, 00:17:35.962 "compare": false, 00:17:35.962 "compare_and_write": false, 00:17:35.962 "abort": true, 00:17:35.962 "seek_hole": false, 00:17:35.962 "seek_data": false, 00:17:35.962 "copy": true, 00:17:35.962 "nvme_iov_md": false 00:17:35.962 }, 00:17:35.962 "memory_domains": [ 00:17:35.962 { 00:17:35.962 "dma_device_id": "system", 00:17:35.962 "dma_device_type": 1 00:17:35.962 }, 00:17:35.962 { 00:17:35.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.962 "dma_device_type": 2 00:17:35.962 } 00:17:35.962 ], 00:17:35.962 "driver_specific": {} 00:17:35.962 } 00:17:35.962 ] 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.962 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.963 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.963 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.963 "name": "Existed_Raid", 00:17:35.963 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:35.963 "strip_size_kb": 0, 00:17:35.963 "state": "configuring", 00:17:35.963 "raid_level": "raid1", 00:17:35.963 "superblock": false, 00:17:35.963 "num_base_bdevs": 4, 00:17:35.963 "num_base_bdevs_discovered": 3, 00:17:35.963 "num_base_bdevs_operational": 4, 00:17:35.963 "base_bdevs_list": [ 00:17:35.963 { 00:17:35.963 "name": "BaseBdev1", 00:17:35.963 "uuid": "d38fc1a7-e708-4ab5-978a-f7a6e8b62f16", 00:17:35.963 "is_configured": true, 00:17:35.963 "data_offset": 0, 00:17:35.963 "data_size": 65536 00:17:35.963 }, 00:17:35.963 { 00:17:35.963 "name": null, 00:17:35.963 "uuid": "eb286919-f6c1-497c-a8d1-e3cdd6546d0a", 00:17:35.963 "is_configured": false, 00:17:35.963 "data_offset": 0, 00:17:35.963 "data_size": 65536 00:17:35.963 }, 00:17:35.963 { 00:17:35.963 "name": "BaseBdev3", 00:17:35.963 "uuid": "21daa463-4a46-47d2-8490-ef517a2fc9db", 00:17:35.963 "is_configured": true, 00:17:35.963 "data_offset": 0, 00:17:35.963 "data_size": 65536 00:17:35.963 }, 00:17:35.963 { 00:17:35.963 "name": "BaseBdev4", 00:17:35.963 "uuid": "40872a0b-d047-4fce-b0fa-0dde40da6055", 00:17:35.963 "is_configured": true, 00:17:35.963 "data_offset": 0, 00:17:35.963 "data_size": 65536 00:17:35.963 } 00:17:35.963 ] 00:17:35.963 }' 00:17:35.963 13:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.963 13:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.529 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.529 13:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.529 13:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.529 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:36.529 13:12:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.529 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:36.529 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:36.529 13:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.529 13:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.529 [2024-12-06 13:12:23.456943] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:36.529 13:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.529 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:36.529 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:36.529 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:36.529 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:36.529 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:36.529 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:36.529 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.529 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.529 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.529 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.529 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:36.529 13:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.529 13:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.529 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.529 13:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.529 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.529 "name": "Existed_Raid", 00:17:36.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.529 "strip_size_kb": 0, 00:17:36.529 "state": "configuring", 00:17:36.530 "raid_level": "raid1", 00:17:36.530 "superblock": false, 00:17:36.530 "num_base_bdevs": 4, 00:17:36.530 "num_base_bdevs_discovered": 2, 00:17:36.530 "num_base_bdevs_operational": 4, 00:17:36.530 "base_bdevs_list": [ 00:17:36.530 { 00:17:36.530 "name": "BaseBdev1", 00:17:36.530 "uuid": "d38fc1a7-e708-4ab5-978a-f7a6e8b62f16", 00:17:36.530 "is_configured": true, 00:17:36.530 "data_offset": 0, 00:17:36.530 "data_size": 65536 00:17:36.530 }, 00:17:36.530 { 00:17:36.530 "name": null, 00:17:36.530 "uuid": "eb286919-f6c1-497c-a8d1-e3cdd6546d0a", 00:17:36.530 "is_configured": false, 00:17:36.530 "data_offset": 0, 00:17:36.530 "data_size": 65536 00:17:36.530 }, 00:17:36.530 { 00:17:36.530 "name": null, 00:17:36.530 "uuid": "21daa463-4a46-47d2-8490-ef517a2fc9db", 00:17:36.530 "is_configured": false, 00:17:36.530 "data_offset": 0, 00:17:36.530 "data_size": 65536 00:17:36.530 }, 00:17:36.530 { 00:17:36.530 "name": "BaseBdev4", 00:17:36.530 "uuid": "40872a0b-d047-4fce-b0fa-0dde40da6055", 00:17:36.530 "is_configured": true, 00:17:36.530 "data_offset": 0, 00:17:36.530 "data_size": 65536 00:17:36.530 } 00:17:36.530 ] 00:17:36.530 }' 00:17:36.530 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.530 13:12:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.097 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.097 13:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:37.097 13:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.097 13:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.097 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.097 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:37.097 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:37.097 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.097 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.097 [2024-12-06 13:12:24.049145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:37.097 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.097 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:37.097 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:37.097 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:37.097 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.097 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.097 13:12:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:37.097 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.097 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.098 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.098 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.098 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.098 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.098 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.098 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.098 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.098 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.098 "name": "Existed_Raid", 00:17:37.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.098 "strip_size_kb": 0, 00:17:37.098 "state": "configuring", 00:17:37.098 "raid_level": "raid1", 00:17:37.098 "superblock": false, 00:17:37.098 "num_base_bdevs": 4, 00:17:37.098 "num_base_bdevs_discovered": 3, 00:17:37.098 "num_base_bdevs_operational": 4, 00:17:37.098 "base_bdevs_list": [ 00:17:37.098 { 00:17:37.098 "name": "BaseBdev1", 00:17:37.098 "uuid": "d38fc1a7-e708-4ab5-978a-f7a6e8b62f16", 00:17:37.098 "is_configured": true, 00:17:37.098 "data_offset": 0, 00:17:37.098 "data_size": 65536 00:17:37.098 }, 00:17:37.098 { 00:17:37.098 "name": null, 00:17:37.098 "uuid": "eb286919-f6c1-497c-a8d1-e3cdd6546d0a", 00:17:37.098 "is_configured": false, 00:17:37.098 "data_offset": 
0, 00:17:37.098 "data_size": 65536 00:17:37.098 }, 00:17:37.098 { 00:17:37.098 "name": "BaseBdev3", 00:17:37.098 "uuid": "21daa463-4a46-47d2-8490-ef517a2fc9db", 00:17:37.098 "is_configured": true, 00:17:37.098 "data_offset": 0, 00:17:37.098 "data_size": 65536 00:17:37.098 }, 00:17:37.098 { 00:17:37.098 "name": "BaseBdev4", 00:17:37.098 "uuid": "40872a0b-d047-4fce-b0fa-0dde40da6055", 00:17:37.098 "is_configured": true, 00:17:37.098 "data_offset": 0, 00:17:37.098 "data_size": 65536 00:17:37.098 } 00:17:37.098 ] 00:17:37.098 }' 00:17:37.098 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.098 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.666 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.666 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.666 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.666 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:37.666 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.666 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:37.666 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:37.666 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.666 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.666 [2024-12-06 13:12:24.633421] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:37.926 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.926 13:12:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:37.926 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:37.926 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:37.926 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.926 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.926 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:37.926 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.926 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.926 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.926 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.926 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.926 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.926 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.926 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.926 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.926 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.926 "name": "Existed_Raid", 00:17:37.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.926 "strip_size_kb": 0, 00:17:37.926 "state": "configuring", 00:17:37.926 
"raid_level": "raid1", 00:17:37.926 "superblock": false, 00:17:37.926 "num_base_bdevs": 4, 00:17:37.926 "num_base_bdevs_discovered": 2, 00:17:37.926 "num_base_bdevs_operational": 4, 00:17:37.926 "base_bdevs_list": [ 00:17:37.926 { 00:17:37.926 "name": null, 00:17:37.926 "uuid": "d38fc1a7-e708-4ab5-978a-f7a6e8b62f16", 00:17:37.926 "is_configured": false, 00:17:37.926 "data_offset": 0, 00:17:37.926 "data_size": 65536 00:17:37.926 }, 00:17:37.926 { 00:17:37.926 "name": null, 00:17:37.926 "uuid": "eb286919-f6c1-497c-a8d1-e3cdd6546d0a", 00:17:37.926 "is_configured": false, 00:17:37.926 "data_offset": 0, 00:17:37.926 "data_size": 65536 00:17:37.926 }, 00:17:37.926 { 00:17:37.926 "name": "BaseBdev3", 00:17:37.926 "uuid": "21daa463-4a46-47d2-8490-ef517a2fc9db", 00:17:37.926 "is_configured": true, 00:17:37.926 "data_offset": 0, 00:17:37.926 "data_size": 65536 00:17:37.926 }, 00:17:37.926 { 00:17:37.926 "name": "BaseBdev4", 00:17:37.926 "uuid": "40872a0b-d047-4fce-b0fa-0dde40da6055", 00:17:37.926 "is_configured": true, 00:17:37.926 "data_offset": 0, 00:17:37.926 "data_size": 65536 00:17:37.926 } 00:17:37.926 ] 00:17:37.926 }' 00:17:37.926 13:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.926 13:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.492 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.492 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.492 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:38.492 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.492 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.492 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:17:38.493 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:38.493 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.493 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.493 [2024-12-06 13:12:25.308524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:38.493 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.493 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:38.493 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:38.493 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:38.493 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:38.493 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:38.493 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:38.493 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.493 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.493 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.493 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.493 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.493 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:17:38.493 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.493 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.493 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.493 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.493 "name": "Existed_Raid", 00:17:38.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.493 "strip_size_kb": 0, 00:17:38.493 "state": "configuring", 00:17:38.493 "raid_level": "raid1", 00:17:38.493 "superblock": false, 00:17:38.493 "num_base_bdevs": 4, 00:17:38.493 "num_base_bdevs_discovered": 3, 00:17:38.493 "num_base_bdevs_operational": 4, 00:17:38.493 "base_bdevs_list": [ 00:17:38.493 { 00:17:38.493 "name": null, 00:17:38.493 "uuid": "d38fc1a7-e708-4ab5-978a-f7a6e8b62f16", 00:17:38.493 "is_configured": false, 00:17:38.493 "data_offset": 0, 00:17:38.493 "data_size": 65536 00:17:38.493 }, 00:17:38.493 { 00:17:38.493 "name": "BaseBdev2", 00:17:38.493 "uuid": "eb286919-f6c1-497c-a8d1-e3cdd6546d0a", 00:17:38.493 "is_configured": true, 00:17:38.493 "data_offset": 0, 00:17:38.493 "data_size": 65536 00:17:38.493 }, 00:17:38.493 { 00:17:38.493 "name": "BaseBdev3", 00:17:38.493 "uuid": "21daa463-4a46-47d2-8490-ef517a2fc9db", 00:17:38.493 "is_configured": true, 00:17:38.493 "data_offset": 0, 00:17:38.493 "data_size": 65536 00:17:38.493 }, 00:17:38.493 { 00:17:38.493 "name": "BaseBdev4", 00:17:38.493 "uuid": "40872a0b-d047-4fce-b0fa-0dde40da6055", 00:17:38.493 "is_configured": true, 00:17:38.493 "data_offset": 0, 00:17:38.493 "data_size": 65536 00:17:38.493 } 00:17:38.493 ] 00:17:38.493 }' 00:17:38.493 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.493 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.059 13:12:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.059 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.059 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.059 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:39.059 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.059 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:39.059 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.059 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.059 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.059 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:39.059 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.059 13:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d38fc1a7-e708-4ab5-978a-f7a6e8b62f16 00:17:39.059 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.059 13:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.059 [2024-12-06 13:12:26.029246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:39.059 [2024-12-06 13:12:26.029323] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:39.059 [2024-12-06 13:12:26.029355] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:39.059 
[2024-12-06 13:12:26.029780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:39.059 [2024-12-06 13:12:26.030086] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:39.059 [2024-12-06 13:12:26.030109] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:39.059 [2024-12-06 13:12:26.030466] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:39.059 NewBaseBdev 00:17:39.059 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.059 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:39.059 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:17:39.059 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:39.059 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:39.059 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:39.059 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:39.059 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:39.059 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.059 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.059 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.059 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:39.059 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:39.059 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.059 [ 00:17:39.059 { 00:17:39.059 "name": "NewBaseBdev", 00:17:39.059 "aliases": [ 00:17:39.059 "d38fc1a7-e708-4ab5-978a-f7a6e8b62f16" 00:17:39.059 ], 00:17:39.059 "product_name": "Malloc disk", 00:17:39.059 "block_size": 512, 00:17:39.059 "num_blocks": 65536, 00:17:39.059 "uuid": "d38fc1a7-e708-4ab5-978a-f7a6e8b62f16", 00:17:39.059 "assigned_rate_limits": { 00:17:39.059 "rw_ios_per_sec": 0, 00:17:39.059 "rw_mbytes_per_sec": 0, 00:17:39.059 "r_mbytes_per_sec": 0, 00:17:39.059 "w_mbytes_per_sec": 0 00:17:39.059 }, 00:17:39.059 "claimed": true, 00:17:39.059 "claim_type": "exclusive_write", 00:17:39.059 "zoned": false, 00:17:39.059 "supported_io_types": { 00:17:39.059 "read": true, 00:17:39.059 "write": true, 00:17:39.059 "unmap": true, 00:17:39.059 "flush": true, 00:17:39.059 "reset": true, 00:17:39.059 "nvme_admin": false, 00:17:39.059 "nvme_io": false, 00:17:39.059 "nvme_io_md": false, 00:17:39.059 "write_zeroes": true, 00:17:39.059 "zcopy": true, 00:17:39.059 "get_zone_info": false, 00:17:39.059 "zone_management": false, 00:17:39.059 "zone_append": false, 00:17:39.059 "compare": false, 00:17:39.059 "compare_and_write": false, 00:17:39.059 "abort": true, 00:17:39.059 "seek_hole": false, 00:17:39.059 "seek_data": false, 00:17:39.059 "copy": true, 00:17:39.059 "nvme_iov_md": false 00:17:39.059 }, 00:17:39.059 "memory_domains": [ 00:17:39.059 { 00:17:39.059 "dma_device_id": "system", 00:17:39.059 "dma_device_type": 1 00:17:39.059 }, 00:17:39.059 { 00:17:39.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.059 "dma_device_type": 2 00:17:39.059 } 00:17:39.059 ], 00:17:39.059 "driver_specific": {} 00:17:39.059 } 00:17:39.059 ] 00:17:39.059 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.059 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:17:39.059 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:17:39.059 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:39.059 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.059 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:39.059 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:39.059 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:39.059 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.059 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.059 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.060 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.060 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.060 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:39.060 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.060 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.317 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.317 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.317 "name": "Existed_Raid", 00:17:39.317 "uuid": "50941224-32a9-46d1-96a2-4c6d40a368a3", 00:17:39.317 "strip_size_kb": 0, 00:17:39.317 "state": "online", 00:17:39.317 
"raid_level": "raid1", 00:17:39.317 "superblock": false, 00:17:39.317 "num_base_bdevs": 4, 00:17:39.317 "num_base_bdevs_discovered": 4, 00:17:39.317 "num_base_bdevs_operational": 4, 00:17:39.317 "base_bdevs_list": [ 00:17:39.317 { 00:17:39.317 "name": "NewBaseBdev", 00:17:39.317 "uuid": "d38fc1a7-e708-4ab5-978a-f7a6e8b62f16", 00:17:39.317 "is_configured": true, 00:17:39.317 "data_offset": 0, 00:17:39.317 "data_size": 65536 00:17:39.317 }, 00:17:39.317 { 00:17:39.317 "name": "BaseBdev2", 00:17:39.317 "uuid": "eb286919-f6c1-497c-a8d1-e3cdd6546d0a", 00:17:39.317 "is_configured": true, 00:17:39.317 "data_offset": 0, 00:17:39.317 "data_size": 65536 00:17:39.317 }, 00:17:39.317 { 00:17:39.317 "name": "BaseBdev3", 00:17:39.317 "uuid": "21daa463-4a46-47d2-8490-ef517a2fc9db", 00:17:39.317 "is_configured": true, 00:17:39.317 "data_offset": 0, 00:17:39.317 "data_size": 65536 00:17:39.317 }, 00:17:39.317 { 00:17:39.317 "name": "BaseBdev4", 00:17:39.317 "uuid": "40872a0b-d047-4fce-b0fa-0dde40da6055", 00:17:39.317 "is_configured": true, 00:17:39.317 "data_offset": 0, 00:17:39.317 "data_size": 65536 00:17:39.317 } 00:17:39.317 ] 00:17:39.317 }' 00:17:39.317 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.317 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.887 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:39.887 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:39.887 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:39.887 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:39.887 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:39.887 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:17:39.887 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:39.887 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.887 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.887 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:39.887 [2024-12-06 13:12:26.614071] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:39.887 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.887 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:39.887 "name": "Existed_Raid", 00:17:39.887 "aliases": [ 00:17:39.887 "50941224-32a9-46d1-96a2-4c6d40a368a3" 00:17:39.887 ], 00:17:39.887 "product_name": "Raid Volume", 00:17:39.887 "block_size": 512, 00:17:39.887 "num_blocks": 65536, 00:17:39.887 "uuid": "50941224-32a9-46d1-96a2-4c6d40a368a3", 00:17:39.887 "assigned_rate_limits": { 00:17:39.887 "rw_ios_per_sec": 0, 00:17:39.887 "rw_mbytes_per_sec": 0, 00:17:39.887 "r_mbytes_per_sec": 0, 00:17:39.887 "w_mbytes_per_sec": 0 00:17:39.887 }, 00:17:39.887 "claimed": false, 00:17:39.887 "zoned": false, 00:17:39.887 "supported_io_types": { 00:17:39.887 "read": true, 00:17:39.887 "write": true, 00:17:39.887 "unmap": false, 00:17:39.887 "flush": false, 00:17:39.887 "reset": true, 00:17:39.887 "nvme_admin": false, 00:17:39.887 "nvme_io": false, 00:17:39.887 "nvme_io_md": false, 00:17:39.887 "write_zeroes": true, 00:17:39.887 "zcopy": false, 00:17:39.887 "get_zone_info": false, 00:17:39.887 "zone_management": false, 00:17:39.887 "zone_append": false, 00:17:39.887 "compare": false, 00:17:39.887 "compare_and_write": false, 00:17:39.887 "abort": false, 00:17:39.887 "seek_hole": false, 00:17:39.887 "seek_data": false, 00:17:39.887 
"copy": false, 00:17:39.887 "nvme_iov_md": false 00:17:39.887 }, 00:17:39.887 "memory_domains": [ 00:17:39.887 { 00:17:39.887 "dma_device_id": "system", 00:17:39.887 "dma_device_type": 1 00:17:39.887 }, 00:17:39.887 { 00:17:39.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.887 "dma_device_type": 2 00:17:39.887 }, 00:17:39.887 { 00:17:39.887 "dma_device_id": "system", 00:17:39.887 "dma_device_type": 1 00:17:39.887 }, 00:17:39.887 { 00:17:39.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.887 "dma_device_type": 2 00:17:39.887 }, 00:17:39.887 { 00:17:39.887 "dma_device_id": "system", 00:17:39.887 "dma_device_type": 1 00:17:39.887 }, 00:17:39.887 { 00:17:39.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.887 "dma_device_type": 2 00:17:39.887 }, 00:17:39.887 { 00:17:39.887 "dma_device_id": "system", 00:17:39.887 "dma_device_type": 1 00:17:39.887 }, 00:17:39.887 { 00:17:39.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.887 "dma_device_type": 2 00:17:39.887 } 00:17:39.887 ], 00:17:39.887 "driver_specific": { 00:17:39.887 "raid": { 00:17:39.887 "uuid": "50941224-32a9-46d1-96a2-4c6d40a368a3", 00:17:39.887 "strip_size_kb": 0, 00:17:39.887 "state": "online", 00:17:39.887 "raid_level": "raid1", 00:17:39.887 "superblock": false, 00:17:39.887 "num_base_bdevs": 4, 00:17:39.887 "num_base_bdevs_discovered": 4, 00:17:39.887 "num_base_bdevs_operational": 4, 00:17:39.887 "base_bdevs_list": [ 00:17:39.887 { 00:17:39.887 "name": "NewBaseBdev", 00:17:39.887 "uuid": "d38fc1a7-e708-4ab5-978a-f7a6e8b62f16", 00:17:39.887 "is_configured": true, 00:17:39.887 "data_offset": 0, 00:17:39.887 "data_size": 65536 00:17:39.887 }, 00:17:39.887 { 00:17:39.887 "name": "BaseBdev2", 00:17:39.887 "uuid": "eb286919-f6c1-497c-a8d1-e3cdd6546d0a", 00:17:39.887 "is_configured": true, 00:17:39.887 "data_offset": 0, 00:17:39.887 "data_size": 65536 00:17:39.887 }, 00:17:39.887 { 00:17:39.887 "name": "BaseBdev3", 00:17:39.887 "uuid": "21daa463-4a46-47d2-8490-ef517a2fc9db", 00:17:39.887 
"is_configured": true, 00:17:39.887 "data_offset": 0, 00:17:39.887 "data_size": 65536 00:17:39.887 }, 00:17:39.887 { 00:17:39.887 "name": "BaseBdev4", 00:17:39.887 "uuid": "40872a0b-d047-4fce-b0fa-0dde40da6055", 00:17:39.887 "is_configured": true, 00:17:39.887 "data_offset": 0, 00:17:39.887 "data_size": 65536 00:17:39.887 } 00:17:39.887 ] 00:17:39.887 } 00:17:39.887 } 00:17:39.887 }' 00:17:39.887 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:39.887 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:39.887 BaseBdev2 00:17:39.887 BaseBdev3 00:17:39.887 BaseBdev4' 00:17:39.887 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:39.887 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:39.887 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:39.887 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:39.887 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.887 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.887 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:39.887 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.887 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:39.887 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:39.887 13:12:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:39.887 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:39.887 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:39.887 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.887 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.169 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.169 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:40.169 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:40.169 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:40.169 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:40.169 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:40.169 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.169 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.169 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.169 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:40.169 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:40.169 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:40.169 13:12:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:40.169 13:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:40.169 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.169 13:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.169 13:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.169 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:40.169 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:40.169 13:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:40.169 13:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.169 13:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.169 [2024-12-06 13:12:27.061725] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:40.169 [2024-12-06 13:12:27.061768] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:40.169 [2024-12-06 13:12:27.061934] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:40.169 [2024-12-06 13:12:27.062411] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:40.169 [2024-12-06 13:12:27.062434] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:40.169 13:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.169 13:12:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73580 00:17:40.169 13:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73580 ']' 00:17:40.169 13:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73580 00:17:40.169 13:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:17:40.169 13:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:40.169 13:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73580 00:17:40.169 13:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:40.169 13:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:40.169 killing process with pid 73580 00:17:40.169 13:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73580' 00:17:40.169 13:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73580 00:17:40.169 [2024-12-06 13:12:27.106510] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:40.169 13:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73580 00:17:40.733 [2024-12-06 13:12:27.482764] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:41.665 ************************************ 00:17:41.665 END TEST raid_state_function_test 00:17:41.665 ************************************ 00:17:41.665 13:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:41.665 00:17:41.665 real 0m13.319s 00:17:41.665 user 0m21.843s 00:17:41.665 sys 0m1.978s 00:17:41.665 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:41.665 13:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:17:41.665 13:12:28 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:17:41.665 13:12:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:41.665 13:12:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:41.665 13:12:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:41.923 ************************************ 00:17:41.923 START TEST raid_state_function_test_sb 00:17:41.923 ************************************ 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:41.923 
13:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74269 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:41.923 Process raid pid: 74269 00:17:41.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74269' 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74269 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74269 ']' 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:41.923 13:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.923 [2024-12-06 13:12:28.811183] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:17:41.923 [2024-12-06 13:12:28.811415] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.180 [2024-12-06 13:12:29.003901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.181 [2024-12-06 13:12:29.152064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.438 [2024-12-06 13:12:29.381731] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:42.438 [2024-12-06 13:12:29.381989] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:43.005 13:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:43.005 13:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:43.005 13:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:43.005 13:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.005 13:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.005 [2024-12-06 13:12:29.760586] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:43.005 [2024-12-06 13:12:29.760684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:43.005 [2024-12-06 13:12:29.760702] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:43.005 [2024-12-06 13:12:29.760719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:43.005 [2024-12-06 13:12:29.760730] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:17:43.005 [2024-12-06 13:12:29.760744] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:43.005 [2024-12-06 13:12:29.760754] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:43.005 [2024-12-06 13:12:29.760769] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:43.005 13:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.005 13:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:43.005 13:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:43.005 13:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:43.005 13:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.005 13:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.005 13:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:43.005 13:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.005 13:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.005 13:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.005 13:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.005 13:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.005 13:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.005 13:12:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.005 13:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.005 13:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.005 13:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.005 "name": "Existed_Raid", 00:17:43.005 "uuid": "de36c6f0-b39c-4c2d-b206-cb25c3540927", 00:17:43.005 "strip_size_kb": 0, 00:17:43.005 "state": "configuring", 00:17:43.005 "raid_level": "raid1", 00:17:43.005 "superblock": true, 00:17:43.005 "num_base_bdevs": 4, 00:17:43.005 "num_base_bdevs_discovered": 0, 00:17:43.005 "num_base_bdevs_operational": 4, 00:17:43.005 "base_bdevs_list": [ 00:17:43.005 { 00:17:43.005 "name": "BaseBdev1", 00:17:43.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.005 "is_configured": false, 00:17:43.005 "data_offset": 0, 00:17:43.005 "data_size": 0 00:17:43.005 }, 00:17:43.005 { 00:17:43.005 "name": "BaseBdev2", 00:17:43.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.005 "is_configured": false, 00:17:43.005 "data_offset": 0, 00:17:43.005 "data_size": 0 00:17:43.005 }, 00:17:43.005 { 00:17:43.005 "name": "BaseBdev3", 00:17:43.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.005 "is_configured": false, 00:17:43.005 "data_offset": 0, 00:17:43.005 "data_size": 0 00:17:43.005 }, 00:17:43.005 { 00:17:43.005 "name": "BaseBdev4", 00:17:43.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.005 "is_configured": false, 00:17:43.005 "data_offset": 0, 00:17:43.005 "data_size": 0 00:17:43.005 } 00:17:43.005 ] 00:17:43.005 }' 00:17:43.005 13:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.005 13:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.572 13:12:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:43.572 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.572 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.572 [2024-12-06 13:12:30.296656] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:43.572 [2024-12-06 13:12:30.296710] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:43.572 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.572 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:43.572 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.572 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.572 [2024-12-06 13:12:30.304628] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:43.572 [2024-12-06 13:12:30.304701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:43.572 [2024-12-06 13:12:30.304718] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:43.572 [2024-12-06 13:12:30.304740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:43.572 [2024-12-06 13:12:30.304751] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:43.572 [2024-12-06 13:12:30.304766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:43.572 [2024-12-06 13:12:30.304776] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:17:43.572 [2024-12-06 13:12:30.304790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:43.572 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.572 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:43.572 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.572 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.573 [2024-12-06 13:12:30.352554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:43.573 BaseBdev1 00:17:43.573 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.573 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:43.573 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:43.573 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:43.573 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:43.573 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:43.573 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:43.573 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:43.573 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.573 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.573 13:12:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.573 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:43.573 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.573 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.573 [ 00:17:43.573 { 00:17:43.573 "name": "BaseBdev1", 00:17:43.573 "aliases": [ 00:17:43.573 "d28f5235-1bc3-48a4-b393-c1d54ee79491" 00:17:43.573 ], 00:17:43.573 "product_name": "Malloc disk", 00:17:43.573 "block_size": 512, 00:17:43.573 "num_blocks": 65536, 00:17:43.573 "uuid": "d28f5235-1bc3-48a4-b393-c1d54ee79491", 00:17:43.573 "assigned_rate_limits": { 00:17:43.573 "rw_ios_per_sec": 0, 00:17:43.573 "rw_mbytes_per_sec": 0, 00:17:43.573 "r_mbytes_per_sec": 0, 00:17:43.573 "w_mbytes_per_sec": 0 00:17:43.573 }, 00:17:43.573 "claimed": true, 00:17:43.573 "claim_type": "exclusive_write", 00:17:43.573 "zoned": false, 00:17:43.573 "supported_io_types": { 00:17:43.573 "read": true, 00:17:43.573 "write": true, 00:17:43.573 "unmap": true, 00:17:43.573 "flush": true, 00:17:43.573 "reset": true, 00:17:43.573 "nvme_admin": false, 00:17:43.573 "nvme_io": false, 00:17:43.573 "nvme_io_md": false, 00:17:43.573 "write_zeroes": true, 00:17:43.573 "zcopy": true, 00:17:43.573 "get_zone_info": false, 00:17:43.573 "zone_management": false, 00:17:43.573 "zone_append": false, 00:17:43.573 "compare": false, 00:17:43.573 "compare_and_write": false, 00:17:43.573 "abort": true, 00:17:43.573 "seek_hole": false, 00:17:43.573 "seek_data": false, 00:17:43.573 "copy": true, 00:17:43.573 "nvme_iov_md": false 00:17:43.573 }, 00:17:43.573 "memory_domains": [ 00:17:43.573 { 00:17:43.573 "dma_device_id": "system", 00:17:43.573 "dma_device_type": 1 00:17:43.573 }, 00:17:43.573 { 00:17:43.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.573 "dma_device_type": 2 00:17:43.573 } 00:17:43.573 
], 00:17:43.573 "driver_specific": {} 00:17:43.573 } 00:17:43.573 ] 00:17:43.573 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.573 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:43.573 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:43.573 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:43.573 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:43.573 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.573 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.573 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:43.573 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.573 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.573 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.573 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.573 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.573 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.573 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.573 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.573 13:12:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.573 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.573 "name": "Existed_Raid", 00:17:43.573 "uuid": "2d70de73-16fd-4902-b94a-5c41873bf3d1", 00:17:43.573 "strip_size_kb": 0, 00:17:43.573 "state": "configuring", 00:17:43.573 "raid_level": "raid1", 00:17:43.573 "superblock": true, 00:17:43.573 "num_base_bdevs": 4, 00:17:43.573 "num_base_bdevs_discovered": 1, 00:17:43.573 "num_base_bdevs_operational": 4, 00:17:43.573 "base_bdevs_list": [ 00:17:43.573 { 00:17:43.573 "name": "BaseBdev1", 00:17:43.573 "uuid": "d28f5235-1bc3-48a4-b393-c1d54ee79491", 00:17:43.573 "is_configured": true, 00:17:43.573 "data_offset": 2048, 00:17:43.573 "data_size": 63488 00:17:43.573 }, 00:17:43.573 { 00:17:43.573 "name": "BaseBdev2", 00:17:43.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.573 "is_configured": false, 00:17:43.573 "data_offset": 0, 00:17:43.573 "data_size": 0 00:17:43.573 }, 00:17:43.573 { 00:17:43.573 "name": "BaseBdev3", 00:17:43.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.573 "is_configured": false, 00:17:43.573 "data_offset": 0, 00:17:43.573 "data_size": 0 00:17:43.573 }, 00:17:43.573 { 00:17:43.573 "name": "BaseBdev4", 00:17:43.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.573 "is_configured": false, 00:17:43.573 "data_offset": 0, 00:17:43.573 "data_size": 0 00:17:43.573 } 00:17:43.573 ] 00:17:43.573 }' 00:17:43.573 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.573 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.140 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:44.140 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.140 13:12:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.140 [2024-12-06 13:12:30.925046] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:44.140 [2024-12-06 13:12:30.925134] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:44.140 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.140 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:44.140 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.140 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.140 [2024-12-06 13:12:30.937119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:44.140 [2024-12-06 13:12:30.939937] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:44.140 [2024-12-06 13:12:30.940213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:44.140 [2024-12-06 13:12:30.940243] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:44.140 [2024-12-06 13:12:30.940264] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:44.140 [2024-12-06 13:12:30.940275] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:44.140 [2024-12-06 13:12:30.940291] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:44.140 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.140 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:17:44.140 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:44.140 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:44.140 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:44.140 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:44.140 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:44.140 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:44.140 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:44.140 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.140 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.140 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.140 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.140 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.140 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.140 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.140 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.140 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.140 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:17:44.140 "name": "Existed_Raid", 00:17:44.140 "uuid": "904e04a0-4024-4f1d-8d80-4a63daa142c0", 00:17:44.140 "strip_size_kb": 0, 00:17:44.140 "state": "configuring", 00:17:44.140 "raid_level": "raid1", 00:17:44.140 "superblock": true, 00:17:44.140 "num_base_bdevs": 4, 00:17:44.140 "num_base_bdevs_discovered": 1, 00:17:44.140 "num_base_bdevs_operational": 4, 00:17:44.140 "base_bdevs_list": [ 00:17:44.140 { 00:17:44.140 "name": "BaseBdev1", 00:17:44.140 "uuid": "d28f5235-1bc3-48a4-b393-c1d54ee79491", 00:17:44.140 "is_configured": true, 00:17:44.140 "data_offset": 2048, 00:17:44.140 "data_size": 63488 00:17:44.140 }, 00:17:44.140 { 00:17:44.140 "name": "BaseBdev2", 00:17:44.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.140 "is_configured": false, 00:17:44.140 "data_offset": 0, 00:17:44.140 "data_size": 0 00:17:44.140 }, 00:17:44.140 { 00:17:44.140 "name": "BaseBdev3", 00:17:44.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.140 "is_configured": false, 00:17:44.140 "data_offset": 0, 00:17:44.140 "data_size": 0 00:17:44.140 }, 00:17:44.140 { 00:17:44.140 "name": "BaseBdev4", 00:17:44.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.140 "is_configured": false, 00:17:44.140 "data_offset": 0, 00:17:44.140 "data_size": 0 00:17:44.140 } 00:17:44.140 ] 00:17:44.140 }' 00:17:44.140 13:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.140 13:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.707 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:44.707 13:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.707 13:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.707 [2024-12-06 13:12:31.532251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:17:44.707 BaseBdev2 00:17:44.707 13:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.707 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:44.707 13:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:44.707 13:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:44.707 13:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:44.707 13:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:44.707 13:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:44.707 13:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:44.707 13:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.707 13:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.707 13:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.707 13:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:44.707 13:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.707 13:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.707 [ 00:17:44.707 { 00:17:44.707 "name": "BaseBdev2", 00:17:44.707 "aliases": [ 00:17:44.707 "2335edc5-9a6d-4751-84c8-2160f3ff34f0" 00:17:44.707 ], 00:17:44.707 "product_name": "Malloc disk", 00:17:44.707 "block_size": 512, 00:17:44.707 "num_blocks": 65536, 00:17:44.707 "uuid": "2335edc5-9a6d-4751-84c8-2160f3ff34f0", 00:17:44.707 
"assigned_rate_limits": { 00:17:44.707 "rw_ios_per_sec": 0, 00:17:44.707 "rw_mbytes_per_sec": 0, 00:17:44.707 "r_mbytes_per_sec": 0, 00:17:44.707 "w_mbytes_per_sec": 0 00:17:44.707 }, 00:17:44.707 "claimed": true, 00:17:44.707 "claim_type": "exclusive_write", 00:17:44.707 "zoned": false, 00:17:44.707 "supported_io_types": { 00:17:44.707 "read": true, 00:17:44.707 "write": true, 00:17:44.708 "unmap": true, 00:17:44.708 "flush": true, 00:17:44.708 "reset": true, 00:17:44.708 "nvme_admin": false, 00:17:44.708 "nvme_io": false, 00:17:44.708 "nvme_io_md": false, 00:17:44.708 "write_zeroes": true, 00:17:44.708 "zcopy": true, 00:17:44.708 "get_zone_info": false, 00:17:44.708 "zone_management": false, 00:17:44.708 "zone_append": false, 00:17:44.708 "compare": false, 00:17:44.708 "compare_and_write": false, 00:17:44.708 "abort": true, 00:17:44.708 "seek_hole": false, 00:17:44.708 "seek_data": false, 00:17:44.708 "copy": true, 00:17:44.708 "nvme_iov_md": false 00:17:44.708 }, 00:17:44.708 "memory_domains": [ 00:17:44.708 { 00:17:44.708 "dma_device_id": "system", 00:17:44.708 "dma_device_type": 1 00:17:44.708 }, 00:17:44.708 { 00:17:44.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.708 "dma_device_type": 2 00:17:44.708 } 00:17:44.708 ], 00:17:44.708 "driver_specific": {} 00:17:44.708 } 00:17:44.708 ] 00:17:44.708 13:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.708 13:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:44.708 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:44.708 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:44.708 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:44.708 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:17:44.708 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:44.708 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:44.708 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:44.708 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:44.708 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.708 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.708 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.708 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.708 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.708 13:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.708 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.708 13:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.708 13:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.708 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.708 "name": "Existed_Raid", 00:17:44.708 "uuid": "904e04a0-4024-4f1d-8d80-4a63daa142c0", 00:17:44.708 "strip_size_kb": 0, 00:17:44.708 "state": "configuring", 00:17:44.708 "raid_level": "raid1", 00:17:44.708 "superblock": true, 00:17:44.708 "num_base_bdevs": 4, 00:17:44.708 "num_base_bdevs_discovered": 2, 00:17:44.708 "num_base_bdevs_operational": 4, 
00:17:44.708 "base_bdevs_list": [ 00:17:44.708 { 00:17:44.708 "name": "BaseBdev1", 00:17:44.708 "uuid": "d28f5235-1bc3-48a4-b393-c1d54ee79491", 00:17:44.708 "is_configured": true, 00:17:44.708 "data_offset": 2048, 00:17:44.708 "data_size": 63488 00:17:44.708 }, 00:17:44.708 { 00:17:44.708 "name": "BaseBdev2", 00:17:44.708 "uuid": "2335edc5-9a6d-4751-84c8-2160f3ff34f0", 00:17:44.708 "is_configured": true, 00:17:44.708 "data_offset": 2048, 00:17:44.708 "data_size": 63488 00:17:44.708 }, 00:17:44.708 { 00:17:44.708 "name": "BaseBdev3", 00:17:44.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.708 "is_configured": false, 00:17:44.708 "data_offset": 0, 00:17:44.708 "data_size": 0 00:17:44.708 }, 00:17:44.708 { 00:17:44.708 "name": "BaseBdev4", 00:17:44.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.708 "is_configured": false, 00:17:44.708 "data_offset": 0, 00:17:44.708 "data_size": 0 00:17:44.708 } 00:17:44.708 ] 00:17:44.708 }' 00:17:44.708 13:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.708 13:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.276 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:45.276 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.276 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.276 [2024-12-06 13:12:32.187104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:45.276 BaseBdev3 00:17:45.276 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.276 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:45.276 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:17:45.276 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:45.276 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:45.276 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:45.276 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:45.276 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:45.276 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.276 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.276 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.276 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:45.276 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.276 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.276 [ 00:17:45.276 { 00:17:45.276 "name": "BaseBdev3", 00:17:45.276 "aliases": [ 00:17:45.276 "68cf8511-f346-4875-aa02-6fad58336c75" 00:17:45.276 ], 00:17:45.276 "product_name": "Malloc disk", 00:17:45.276 "block_size": 512, 00:17:45.276 "num_blocks": 65536, 00:17:45.276 "uuid": "68cf8511-f346-4875-aa02-6fad58336c75", 00:17:45.276 "assigned_rate_limits": { 00:17:45.276 "rw_ios_per_sec": 0, 00:17:45.276 "rw_mbytes_per_sec": 0, 00:17:45.276 "r_mbytes_per_sec": 0, 00:17:45.276 "w_mbytes_per_sec": 0 00:17:45.276 }, 00:17:45.276 "claimed": true, 00:17:45.276 "claim_type": "exclusive_write", 00:17:45.276 "zoned": false, 00:17:45.276 "supported_io_types": { 00:17:45.276 "read": true, 00:17:45.276 
"write": true, 00:17:45.276 "unmap": true, 00:17:45.276 "flush": true, 00:17:45.276 "reset": true, 00:17:45.276 "nvme_admin": false, 00:17:45.276 "nvme_io": false, 00:17:45.276 "nvme_io_md": false, 00:17:45.276 "write_zeroes": true, 00:17:45.276 "zcopy": true, 00:17:45.276 "get_zone_info": false, 00:17:45.276 "zone_management": false, 00:17:45.276 "zone_append": false, 00:17:45.276 "compare": false, 00:17:45.276 "compare_and_write": false, 00:17:45.276 "abort": true, 00:17:45.276 "seek_hole": false, 00:17:45.276 "seek_data": false, 00:17:45.276 "copy": true, 00:17:45.276 "nvme_iov_md": false 00:17:45.276 }, 00:17:45.276 "memory_domains": [ 00:17:45.276 { 00:17:45.276 "dma_device_id": "system", 00:17:45.276 "dma_device_type": 1 00:17:45.276 }, 00:17:45.276 { 00:17:45.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.276 "dma_device_type": 2 00:17:45.276 } 00:17:45.276 ], 00:17:45.276 "driver_specific": {} 00:17:45.276 } 00:17:45.276 ] 00:17:45.276 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.276 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:45.276 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:45.276 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:45.276 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:45.276 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:45.276 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:45.276 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:45.276 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:17:45.276 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:45.276 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.276 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.276 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.276 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.276 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.276 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.276 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.276 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.276 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.535 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.535 "name": "Existed_Raid", 00:17:45.535 "uuid": "904e04a0-4024-4f1d-8d80-4a63daa142c0", 00:17:45.535 "strip_size_kb": 0, 00:17:45.535 "state": "configuring", 00:17:45.535 "raid_level": "raid1", 00:17:45.535 "superblock": true, 00:17:45.535 "num_base_bdevs": 4, 00:17:45.535 "num_base_bdevs_discovered": 3, 00:17:45.535 "num_base_bdevs_operational": 4, 00:17:45.535 "base_bdevs_list": [ 00:17:45.535 { 00:17:45.535 "name": "BaseBdev1", 00:17:45.535 "uuid": "d28f5235-1bc3-48a4-b393-c1d54ee79491", 00:17:45.535 "is_configured": true, 00:17:45.535 "data_offset": 2048, 00:17:45.535 "data_size": 63488 00:17:45.535 }, 00:17:45.535 { 00:17:45.535 "name": "BaseBdev2", 00:17:45.535 "uuid": 
"2335edc5-9a6d-4751-84c8-2160f3ff34f0", 00:17:45.535 "is_configured": true, 00:17:45.535 "data_offset": 2048, 00:17:45.535 "data_size": 63488 00:17:45.535 }, 00:17:45.535 { 00:17:45.535 "name": "BaseBdev3", 00:17:45.535 "uuid": "68cf8511-f346-4875-aa02-6fad58336c75", 00:17:45.535 "is_configured": true, 00:17:45.535 "data_offset": 2048, 00:17:45.535 "data_size": 63488 00:17:45.535 }, 00:17:45.535 { 00:17:45.535 "name": "BaseBdev4", 00:17:45.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.535 "is_configured": false, 00:17:45.535 "data_offset": 0, 00:17:45.535 "data_size": 0 00:17:45.535 } 00:17:45.535 ] 00:17:45.535 }' 00:17:45.535 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.535 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.793 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:45.793 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.793 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.051 [2024-12-06 13:12:32.843147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:46.051 [2024-12-06 13:12:32.843556] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:46.052 [2024-12-06 13:12:32.843579] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:46.052 [2024-12-06 13:12:32.843953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:46.052 BaseBdev4 00:17:46.052 [2024-12-06 13:12:32.844364] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:46.052 [2024-12-06 13:12:32.844396] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:17:46.052 [2024-12-06 13:12:32.844630] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.052 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.052 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:46.052 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:46.052 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:46.052 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:46.052 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:46.052 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:46.052 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:46.052 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.052 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.052 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.052 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:46.052 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.052 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.052 [ 00:17:46.052 { 00:17:46.052 "name": "BaseBdev4", 00:17:46.052 "aliases": [ 00:17:46.052 "e63bc2de-3614-4ee6-90e4-474cbac5e296" 00:17:46.052 ], 00:17:46.052 "product_name": "Malloc disk", 00:17:46.052 "block_size": 512, 00:17:46.052 
"num_blocks": 65536, 00:17:46.052 "uuid": "e63bc2de-3614-4ee6-90e4-474cbac5e296", 00:17:46.052 "assigned_rate_limits": { 00:17:46.052 "rw_ios_per_sec": 0, 00:17:46.052 "rw_mbytes_per_sec": 0, 00:17:46.052 "r_mbytes_per_sec": 0, 00:17:46.052 "w_mbytes_per_sec": 0 00:17:46.052 }, 00:17:46.052 "claimed": true, 00:17:46.052 "claim_type": "exclusive_write", 00:17:46.052 "zoned": false, 00:17:46.052 "supported_io_types": { 00:17:46.052 "read": true, 00:17:46.052 "write": true, 00:17:46.052 "unmap": true, 00:17:46.052 "flush": true, 00:17:46.052 "reset": true, 00:17:46.052 "nvme_admin": false, 00:17:46.052 "nvme_io": false, 00:17:46.052 "nvme_io_md": false, 00:17:46.052 "write_zeroes": true, 00:17:46.052 "zcopy": true, 00:17:46.052 "get_zone_info": false, 00:17:46.052 "zone_management": false, 00:17:46.052 "zone_append": false, 00:17:46.052 "compare": false, 00:17:46.052 "compare_and_write": false, 00:17:46.052 "abort": true, 00:17:46.052 "seek_hole": false, 00:17:46.052 "seek_data": false, 00:17:46.052 "copy": true, 00:17:46.052 "nvme_iov_md": false 00:17:46.052 }, 00:17:46.052 "memory_domains": [ 00:17:46.052 { 00:17:46.052 "dma_device_id": "system", 00:17:46.052 "dma_device_type": 1 00:17:46.052 }, 00:17:46.052 { 00:17:46.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.052 "dma_device_type": 2 00:17:46.052 } 00:17:46.052 ], 00:17:46.052 "driver_specific": {} 00:17:46.052 } 00:17:46.052 ] 00:17:46.052 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.052 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:46.052 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:46.052 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:46.052 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:17:46.052 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:46.052 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.052 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:46.052 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:46.052 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:46.052 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.052 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.052 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.052 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.052 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.052 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.052 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.052 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.052 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.052 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.052 "name": "Existed_Raid", 00:17:46.052 "uuid": "904e04a0-4024-4f1d-8d80-4a63daa142c0", 00:17:46.052 "strip_size_kb": 0, 00:17:46.052 "state": "online", 00:17:46.052 "raid_level": "raid1", 00:17:46.052 "superblock": true, 00:17:46.052 "num_base_bdevs": 4, 
00:17:46.052 "num_base_bdevs_discovered": 4, 00:17:46.052 "num_base_bdevs_operational": 4, 00:17:46.052 "base_bdevs_list": [ 00:17:46.052 { 00:17:46.052 "name": "BaseBdev1", 00:17:46.052 "uuid": "d28f5235-1bc3-48a4-b393-c1d54ee79491", 00:17:46.052 "is_configured": true, 00:17:46.052 "data_offset": 2048, 00:17:46.052 "data_size": 63488 00:17:46.052 }, 00:17:46.052 { 00:17:46.052 "name": "BaseBdev2", 00:17:46.052 "uuid": "2335edc5-9a6d-4751-84c8-2160f3ff34f0", 00:17:46.052 "is_configured": true, 00:17:46.052 "data_offset": 2048, 00:17:46.052 "data_size": 63488 00:17:46.052 }, 00:17:46.052 { 00:17:46.052 "name": "BaseBdev3", 00:17:46.052 "uuid": "68cf8511-f346-4875-aa02-6fad58336c75", 00:17:46.052 "is_configured": true, 00:17:46.052 "data_offset": 2048, 00:17:46.052 "data_size": 63488 00:17:46.052 }, 00:17:46.052 { 00:17:46.052 "name": "BaseBdev4", 00:17:46.052 "uuid": "e63bc2de-3614-4ee6-90e4-474cbac5e296", 00:17:46.052 "is_configured": true, 00:17:46.052 "data_offset": 2048, 00:17:46.052 "data_size": 63488 00:17:46.052 } 00:17:46.052 ] 00:17:46.052 }' 00:17:46.052 13:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.052 13:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.619 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:46.619 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:46.619 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:46.619 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:46.619 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:46.619 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:46.619 
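The `verify_raid_bdev_properties` helper that follows compares the raid volume's `[block_size, md_size, md_interleave, dif_type]` tuple against each configured base bdev's, using the `jq` filters at `bdev_raid.sh@187`–`@193`. A standalone sketch of those two filters (sample JSON trimmed from the output above; `jq` is assumed to be installed):

```shell
# Sketch of the @188/@189 filters: list configured base bdev names, then
# build the comparison tuple. jq's join() renders null as an empty string,
# which is why cmp_raid_bdev in the log is '512 ' with trailing spaces.
raid='{"block_size":512,
"driver_specific":{"raid":{"base_bdevs_list":[
 {"name":"BaseBdev1","is_configured":true},
 {"name":"BaseBdev2","is_configured":false}]}}}'

# Names of configured base bdevs (the @188 filter)
names=$(echo "$raid" | jq -r \
  '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')

# Comparison tuple for the raid bdev (the @189 filter); missing keys read
# as null, so this yields "512" followed by three spaces
cmp_raid=$(echo "$raid" | jq -r \
  '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')

echo "$names"    # BaseBdev1
```

The trailing-space tuple explains the escaped pattern `[[ 512 == \5\1\2\ \ \ ]]` seen in the log: the three literal escaped spaces match the three `null` fields joined by `join(" ")`.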
13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:46.619 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:46.619 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.619 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.619 [2024-12-06 13:12:33.407942] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:46.619 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.619 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:46.619 "name": "Existed_Raid", 00:17:46.619 "aliases": [ 00:17:46.619 "904e04a0-4024-4f1d-8d80-4a63daa142c0" 00:17:46.619 ], 00:17:46.619 "product_name": "Raid Volume", 00:17:46.619 "block_size": 512, 00:17:46.619 "num_blocks": 63488, 00:17:46.619 "uuid": "904e04a0-4024-4f1d-8d80-4a63daa142c0", 00:17:46.619 "assigned_rate_limits": { 00:17:46.619 "rw_ios_per_sec": 0, 00:17:46.619 "rw_mbytes_per_sec": 0, 00:17:46.619 "r_mbytes_per_sec": 0, 00:17:46.619 "w_mbytes_per_sec": 0 00:17:46.619 }, 00:17:46.619 "claimed": false, 00:17:46.619 "zoned": false, 00:17:46.619 "supported_io_types": { 00:17:46.619 "read": true, 00:17:46.619 "write": true, 00:17:46.619 "unmap": false, 00:17:46.619 "flush": false, 00:17:46.619 "reset": true, 00:17:46.619 "nvme_admin": false, 00:17:46.619 "nvme_io": false, 00:17:46.619 "nvme_io_md": false, 00:17:46.619 "write_zeroes": true, 00:17:46.619 "zcopy": false, 00:17:46.619 "get_zone_info": false, 00:17:46.619 "zone_management": false, 00:17:46.619 "zone_append": false, 00:17:46.619 "compare": false, 00:17:46.619 "compare_and_write": false, 00:17:46.619 "abort": false, 00:17:46.619 "seek_hole": false, 00:17:46.619 "seek_data": false, 00:17:46.619 "copy": false, 00:17:46.619 
"nvme_iov_md": false 00:17:46.619 }, 00:17:46.619 "memory_domains": [ 00:17:46.619 { 00:17:46.619 "dma_device_id": "system", 00:17:46.619 "dma_device_type": 1 00:17:46.619 }, 00:17:46.619 { 00:17:46.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.619 "dma_device_type": 2 00:17:46.619 }, 00:17:46.619 { 00:17:46.619 "dma_device_id": "system", 00:17:46.619 "dma_device_type": 1 00:17:46.619 }, 00:17:46.619 { 00:17:46.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.619 "dma_device_type": 2 00:17:46.619 }, 00:17:46.619 { 00:17:46.619 "dma_device_id": "system", 00:17:46.619 "dma_device_type": 1 00:17:46.619 }, 00:17:46.619 { 00:17:46.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.619 "dma_device_type": 2 00:17:46.619 }, 00:17:46.619 { 00:17:46.619 "dma_device_id": "system", 00:17:46.619 "dma_device_type": 1 00:17:46.619 }, 00:17:46.619 { 00:17:46.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.620 "dma_device_type": 2 00:17:46.620 } 00:17:46.620 ], 00:17:46.620 "driver_specific": { 00:17:46.620 "raid": { 00:17:46.620 "uuid": "904e04a0-4024-4f1d-8d80-4a63daa142c0", 00:17:46.620 "strip_size_kb": 0, 00:17:46.620 "state": "online", 00:17:46.620 "raid_level": "raid1", 00:17:46.620 "superblock": true, 00:17:46.620 "num_base_bdevs": 4, 00:17:46.620 "num_base_bdevs_discovered": 4, 00:17:46.620 "num_base_bdevs_operational": 4, 00:17:46.620 "base_bdevs_list": [ 00:17:46.620 { 00:17:46.620 "name": "BaseBdev1", 00:17:46.620 "uuid": "d28f5235-1bc3-48a4-b393-c1d54ee79491", 00:17:46.620 "is_configured": true, 00:17:46.620 "data_offset": 2048, 00:17:46.620 "data_size": 63488 00:17:46.620 }, 00:17:46.620 { 00:17:46.620 "name": "BaseBdev2", 00:17:46.620 "uuid": "2335edc5-9a6d-4751-84c8-2160f3ff34f0", 00:17:46.620 "is_configured": true, 00:17:46.620 "data_offset": 2048, 00:17:46.620 "data_size": 63488 00:17:46.620 }, 00:17:46.620 { 00:17:46.620 "name": "BaseBdev3", 00:17:46.620 "uuid": "68cf8511-f346-4875-aa02-6fad58336c75", 00:17:46.620 "is_configured": true, 
00:17:46.620 "data_offset": 2048, 00:17:46.620 "data_size": 63488 00:17:46.620 }, 00:17:46.620 { 00:17:46.620 "name": "BaseBdev4", 00:17:46.620 "uuid": "e63bc2de-3614-4ee6-90e4-474cbac5e296", 00:17:46.620 "is_configured": true, 00:17:46.620 "data_offset": 2048, 00:17:46.620 "data_size": 63488 00:17:46.620 } 00:17:46.620 ] 00:17:46.620 } 00:17:46.620 } 00:17:46.620 }' 00:17:46.620 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:46.620 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:46.620 BaseBdev2 00:17:46.620 BaseBdev3 00:17:46.620 BaseBdev4' 00:17:46.620 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:46.620 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:46.620 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:46.620 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:46.620 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.620 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:46.620 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.620 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.620 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:46.620 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:46.620 13:12:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:46.620 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:46.620 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:46.620 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.620 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.879 [2024-12-06 13:12:33.787674] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:46.879 13:12:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.879 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.138 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.138 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.138 "name": "Existed_Raid", 00:17:47.138 "uuid": "904e04a0-4024-4f1d-8d80-4a63daa142c0", 00:17:47.138 "strip_size_kb": 0, 00:17:47.138 
"state": "online", 00:17:47.138 "raid_level": "raid1", 00:17:47.138 "superblock": true, 00:17:47.138 "num_base_bdevs": 4, 00:17:47.138 "num_base_bdevs_discovered": 3, 00:17:47.138 "num_base_bdevs_operational": 3, 00:17:47.138 "base_bdevs_list": [ 00:17:47.138 { 00:17:47.138 "name": null, 00:17:47.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.138 "is_configured": false, 00:17:47.138 "data_offset": 0, 00:17:47.138 "data_size": 63488 00:17:47.138 }, 00:17:47.138 { 00:17:47.138 "name": "BaseBdev2", 00:17:47.138 "uuid": "2335edc5-9a6d-4751-84c8-2160f3ff34f0", 00:17:47.138 "is_configured": true, 00:17:47.138 "data_offset": 2048, 00:17:47.138 "data_size": 63488 00:17:47.138 }, 00:17:47.138 { 00:17:47.138 "name": "BaseBdev3", 00:17:47.138 "uuid": "68cf8511-f346-4875-aa02-6fad58336c75", 00:17:47.138 "is_configured": true, 00:17:47.138 "data_offset": 2048, 00:17:47.138 "data_size": 63488 00:17:47.138 }, 00:17:47.138 { 00:17:47.138 "name": "BaseBdev4", 00:17:47.138 "uuid": "e63bc2de-3614-4ee6-90e4-474cbac5e296", 00:17:47.138 "is_configured": true, 00:17:47.138 "data_offset": 2048, 00:17:47.138 "data_size": 63488 00:17:47.138 } 00:17:47.138 ] 00:17:47.138 }' 00:17:47.138 13:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.138 13:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.398 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:47.398 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:47.398 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.398 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:47.398 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.398 13:12:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.656 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.656 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:47.656 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:47.656 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:47.656 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.656 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.656 [2024-12-06 13:12:34.466046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:47.656 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.656 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:47.656 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:47.656 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:47.656 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.656 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.656 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.656 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.656 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:47.656 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:17:47.656 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:47.656 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.656 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.656 [2024-12-06 13:12:34.619373] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:47.915 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.915 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:47.915 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:47.916 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.916 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.916 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:47.916 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.916 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.916 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:47.916 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:47.916 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:47.916 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.916 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.916 [2024-12-06 13:12:34.771940] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:47.916 [2024-12-06 13:12:34.772393] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:47.916 [2024-12-06 13:12:34.861163] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:47.916 [2024-12-06 13:12:34.861568] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:47.916 [2024-12-06 13:12:34.861781] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:47.916 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.916 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:47.916 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:47.916 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.916 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:47.916 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.916 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.916 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.916 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:47.916 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:47.916 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:47.916 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:47.916 13:12:34 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:47.916 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:47.916 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.916 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.175 BaseBdev2 00:17:48.175 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.175 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:48.175 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:48.175 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:48.175 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:48.175 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:48.175 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:48.175 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:48.175 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.175 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.175 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.175 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:48.175 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.175 13:12:34 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:17:48.175 [ 00:17:48.175 { 00:17:48.175 "name": "BaseBdev2", 00:17:48.175 "aliases": [ 00:17:48.175 "3d9b6df2-1d23-44ec-85ff-0bd413df2222" 00:17:48.175 ], 00:17:48.175 "product_name": "Malloc disk", 00:17:48.175 "block_size": 512, 00:17:48.175 "num_blocks": 65536, 00:17:48.175 "uuid": "3d9b6df2-1d23-44ec-85ff-0bd413df2222", 00:17:48.175 "assigned_rate_limits": { 00:17:48.175 "rw_ios_per_sec": 0, 00:17:48.175 "rw_mbytes_per_sec": 0, 00:17:48.175 "r_mbytes_per_sec": 0, 00:17:48.175 "w_mbytes_per_sec": 0 00:17:48.175 }, 00:17:48.175 "claimed": false, 00:17:48.175 "zoned": false, 00:17:48.175 "supported_io_types": { 00:17:48.175 "read": true, 00:17:48.175 "write": true, 00:17:48.175 "unmap": true, 00:17:48.175 "flush": true, 00:17:48.175 "reset": true, 00:17:48.175 "nvme_admin": false, 00:17:48.175 "nvme_io": false, 00:17:48.175 "nvme_io_md": false, 00:17:48.175 "write_zeroes": true, 00:17:48.175 "zcopy": true, 00:17:48.175 "get_zone_info": false, 00:17:48.175 "zone_management": false, 00:17:48.175 "zone_append": false, 00:17:48.175 "compare": false, 00:17:48.175 "compare_and_write": false, 00:17:48.175 "abort": true, 00:17:48.175 "seek_hole": false, 00:17:48.175 "seek_data": false, 00:17:48.175 "copy": true, 00:17:48.175 "nvme_iov_md": false 00:17:48.175 }, 00:17:48.175 "memory_domains": [ 00:17:48.175 { 00:17:48.175 "dma_device_id": "system", 00:17:48.175 "dma_device_type": 1 00:17:48.175 }, 00:17:48.175 { 00:17:48.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.176 "dma_device_type": 2 00:17:48.176 } 00:17:48.176 ], 00:17:48.176 "driver_specific": {} 00:17:48.176 } 00:17:48.176 ] 00:17:48.176 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.176 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:48.176 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:48.176 13:12:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:48.176 13:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:48.176 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.176 13:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.176 BaseBdev3 00:17:48.176 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.176 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:48.176 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:48.176 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:48.176 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:48.176 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:48.176 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:48.176 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:48.176 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.176 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.176 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.176 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:48.176 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.176 13:12:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.176 [ 00:17:48.176 { 00:17:48.176 "name": "BaseBdev3", 00:17:48.176 "aliases": [ 00:17:48.176 "f0504bd6-6364-4665-b4a9-e2c62bfd5f41" 00:17:48.176 ], 00:17:48.176 "product_name": "Malloc disk", 00:17:48.176 "block_size": 512, 00:17:48.176 "num_blocks": 65536, 00:17:48.176 "uuid": "f0504bd6-6364-4665-b4a9-e2c62bfd5f41", 00:17:48.176 "assigned_rate_limits": { 00:17:48.176 "rw_ios_per_sec": 0, 00:17:48.176 "rw_mbytes_per_sec": 0, 00:17:48.176 "r_mbytes_per_sec": 0, 00:17:48.176 "w_mbytes_per_sec": 0 00:17:48.176 }, 00:17:48.176 "claimed": false, 00:17:48.176 "zoned": false, 00:17:48.176 "supported_io_types": { 00:17:48.176 "read": true, 00:17:48.176 "write": true, 00:17:48.176 "unmap": true, 00:17:48.176 "flush": true, 00:17:48.176 "reset": true, 00:17:48.176 "nvme_admin": false, 00:17:48.176 "nvme_io": false, 00:17:48.176 "nvme_io_md": false, 00:17:48.176 "write_zeroes": true, 00:17:48.176 "zcopy": true, 00:17:48.176 "get_zone_info": false, 00:17:48.176 "zone_management": false, 00:17:48.176 "zone_append": false, 00:17:48.176 "compare": false, 00:17:48.176 "compare_and_write": false, 00:17:48.176 "abort": true, 00:17:48.176 "seek_hole": false, 00:17:48.176 "seek_data": false, 00:17:48.176 "copy": true, 00:17:48.176 "nvme_iov_md": false 00:17:48.176 }, 00:17:48.176 "memory_domains": [ 00:17:48.176 { 00:17:48.176 "dma_device_id": "system", 00:17:48.176 "dma_device_type": 1 00:17:48.176 }, 00:17:48.176 { 00:17:48.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.176 "dma_device_type": 2 00:17:48.176 } 00:17:48.176 ], 00:17:48.176 "driver_specific": {} 00:17:48.176 } 00:17:48.176 ] 00:17:48.176 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.176 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:48.176 13:12:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:48.176 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:48.176 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:48.176 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.176 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.176 BaseBdev4 00:17:48.176 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.176 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:48.176 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:48.176 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:48.176 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:48.176 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:48.176 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:48.176 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:48.176 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.176 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.176 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.176 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:48.176 13:12:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.176 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.176 [ 00:17:48.176 { 00:17:48.176 "name": "BaseBdev4", 00:17:48.176 "aliases": [ 00:17:48.176 "893309eb-dea3-44da-bb34-205f66d39c1e" 00:17:48.176 ], 00:17:48.176 "product_name": "Malloc disk", 00:17:48.176 "block_size": 512, 00:17:48.176 "num_blocks": 65536, 00:17:48.176 "uuid": "893309eb-dea3-44da-bb34-205f66d39c1e", 00:17:48.176 "assigned_rate_limits": { 00:17:48.176 "rw_ios_per_sec": 0, 00:17:48.176 "rw_mbytes_per_sec": 0, 00:17:48.176 "r_mbytes_per_sec": 0, 00:17:48.176 "w_mbytes_per_sec": 0 00:17:48.176 }, 00:17:48.176 "claimed": false, 00:17:48.176 "zoned": false, 00:17:48.176 "supported_io_types": { 00:17:48.176 "read": true, 00:17:48.176 "write": true, 00:17:48.176 "unmap": true, 00:17:48.176 "flush": true, 00:17:48.176 "reset": true, 00:17:48.176 "nvme_admin": false, 00:17:48.176 "nvme_io": false, 00:17:48.176 "nvme_io_md": false, 00:17:48.176 "write_zeroes": true, 00:17:48.176 "zcopy": true, 00:17:48.176 "get_zone_info": false, 00:17:48.176 "zone_management": false, 00:17:48.176 "zone_append": false, 00:17:48.176 "compare": false, 00:17:48.176 "compare_and_write": false, 00:17:48.176 "abort": true, 00:17:48.176 "seek_hole": false, 00:17:48.176 "seek_data": false, 00:17:48.176 "copy": true, 00:17:48.176 "nvme_iov_md": false 00:17:48.176 }, 00:17:48.176 "memory_domains": [ 00:17:48.177 { 00:17:48.177 "dma_device_id": "system", 00:17:48.177 "dma_device_type": 1 00:17:48.177 }, 00:17:48.177 { 00:17:48.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.177 "dma_device_type": 2 00:17:48.177 } 00:17:48.177 ], 00:17:48.177 "driver_specific": {} 00:17:48.177 } 00:17:48.177 ] 00:17:48.177 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.177 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:17:48.177 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:48.177 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:48.177 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:48.177 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.177 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.177 [2024-12-06 13:12:35.173748] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:48.177 [2024-12-06 13:12:35.173867] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:48.177 [2024-12-06 13:12:35.173906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:48.177 [2024-12-06 13:12:35.176851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:48.177 [2024-12-06 13:12:35.176922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:48.177 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.177 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:48.177 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:48.177 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:48.177 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:48.177 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:17:48.177 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:48.177 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.177 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.177 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.177 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.177 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.177 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.177 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.177 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.435 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.435 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.435 "name": "Existed_Raid", 00:17:48.435 "uuid": "192aed49-3447-4045-8726-f6faf420e383", 00:17:48.435 "strip_size_kb": 0, 00:17:48.435 "state": "configuring", 00:17:48.435 "raid_level": "raid1", 00:17:48.435 "superblock": true, 00:17:48.435 "num_base_bdevs": 4, 00:17:48.435 "num_base_bdevs_discovered": 3, 00:17:48.435 "num_base_bdevs_operational": 4, 00:17:48.435 "base_bdevs_list": [ 00:17:48.435 { 00:17:48.435 "name": "BaseBdev1", 00:17:48.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.435 "is_configured": false, 00:17:48.435 "data_offset": 0, 00:17:48.435 "data_size": 0 00:17:48.435 }, 00:17:48.435 { 00:17:48.435 "name": "BaseBdev2", 00:17:48.435 "uuid": "3d9b6df2-1d23-44ec-85ff-0bd413df2222", 
00:17:48.435 "is_configured": true, 00:17:48.435 "data_offset": 2048, 00:17:48.435 "data_size": 63488 00:17:48.435 }, 00:17:48.435 { 00:17:48.435 "name": "BaseBdev3", 00:17:48.435 "uuid": "f0504bd6-6364-4665-b4a9-e2c62bfd5f41", 00:17:48.435 "is_configured": true, 00:17:48.435 "data_offset": 2048, 00:17:48.435 "data_size": 63488 00:17:48.435 }, 00:17:48.435 { 00:17:48.435 "name": "BaseBdev4", 00:17:48.435 "uuid": "893309eb-dea3-44da-bb34-205f66d39c1e", 00:17:48.435 "is_configured": true, 00:17:48.435 "data_offset": 2048, 00:17:48.435 "data_size": 63488 00:17:48.435 } 00:17:48.435 ] 00:17:48.435 }' 00:17:48.435 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.435 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.001 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:49.001 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.001 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.001 [2024-12-06 13:12:35.746026] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:49.001 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.001 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:49.001 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:49.001 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:49.001 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:49.001 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:17:49.001 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:49.001 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.001 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.001 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.001 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.001 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.001 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.001 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.001 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.001 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.001 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.001 "name": "Existed_Raid", 00:17:49.001 "uuid": "192aed49-3447-4045-8726-f6faf420e383", 00:17:49.001 "strip_size_kb": 0, 00:17:49.001 "state": "configuring", 00:17:49.001 "raid_level": "raid1", 00:17:49.001 "superblock": true, 00:17:49.001 "num_base_bdevs": 4, 00:17:49.001 "num_base_bdevs_discovered": 2, 00:17:49.001 "num_base_bdevs_operational": 4, 00:17:49.001 "base_bdevs_list": [ 00:17:49.001 { 00:17:49.001 "name": "BaseBdev1", 00:17:49.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.001 "is_configured": false, 00:17:49.001 "data_offset": 0, 00:17:49.001 "data_size": 0 00:17:49.001 }, 00:17:49.001 { 00:17:49.001 "name": null, 00:17:49.001 "uuid": "3d9b6df2-1d23-44ec-85ff-0bd413df2222", 00:17:49.001 
"is_configured": false, 00:17:49.001 "data_offset": 0, 00:17:49.001 "data_size": 63488 00:17:49.001 }, 00:17:49.001 { 00:17:49.001 "name": "BaseBdev3", 00:17:49.001 "uuid": "f0504bd6-6364-4665-b4a9-e2c62bfd5f41", 00:17:49.001 "is_configured": true, 00:17:49.001 "data_offset": 2048, 00:17:49.001 "data_size": 63488 00:17:49.001 }, 00:17:49.001 { 00:17:49.001 "name": "BaseBdev4", 00:17:49.001 "uuid": "893309eb-dea3-44da-bb34-205f66d39c1e", 00:17:49.001 "is_configured": true, 00:17:49.001 "data_offset": 2048, 00:17:49.001 "data_size": 63488 00:17:49.001 } 00:17:49.001 ] 00:17:49.001 }' 00:17:49.001 13:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.001 13:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.568 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.568 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.568 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.568 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:49.568 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.568 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:49.568 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:49.568 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.568 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.568 [2024-12-06 13:12:36.376121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:49.568 BaseBdev1 
00:17:49.568 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.568 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:49.568 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:49.568 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:49.568 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:49.568 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:49.568 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:49.568 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:49.568 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.568 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.568 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.568 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:49.568 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.568 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.568 [ 00:17:49.568 { 00:17:49.568 "name": "BaseBdev1", 00:17:49.568 "aliases": [ 00:17:49.568 "05ccca18-6a67-44d2-acfa-24229ad40f9f" 00:17:49.568 ], 00:17:49.568 "product_name": "Malloc disk", 00:17:49.568 "block_size": 512, 00:17:49.568 "num_blocks": 65536, 00:17:49.568 "uuid": "05ccca18-6a67-44d2-acfa-24229ad40f9f", 00:17:49.568 "assigned_rate_limits": { 00:17:49.568 
"rw_ios_per_sec": 0, 00:17:49.568 "rw_mbytes_per_sec": 0, 00:17:49.568 "r_mbytes_per_sec": 0, 00:17:49.568 "w_mbytes_per_sec": 0 00:17:49.568 }, 00:17:49.568 "claimed": true, 00:17:49.568 "claim_type": "exclusive_write", 00:17:49.568 "zoned": false, 00:17:49.568 "supported_io_types": { 00:17:49.568 "read": true, 00:17:49.568 "write": true, 00:17:49.568 "unmap": true, 00:17:49.568 "flush": true, 00:17:49.568 "reset": true, 00:17:49.568 "nvme_admin": false, 00:17:49.568 "nvme_io": false, 00:17:49.568 "nvme_io_md": false, 00:17:49.568 "write_zeroes": true, 00:17:49.568 "zcopy": true, 00:17:49.568 "get_zone_info": false, 00:17:49.568 "zone_management": false, 00:17:49.568 "zone_append": false, 00:17:49.568 "compare": false, 00:17:49.568 "compare_and_write": false, 00:17:49.568 "abort": true, 00:17:49.568 "seek_hole": false, 00:17:49.568 "seek_data": false, 00:17:49.568 "copy": true, 00:17:49.568 "nvme_iov_md": false 00:17:49.568 }, 00:17:49.568 "memory_domains": [ 00:17:49.568 { 00:17:49.568 "dma_device_id": "system", 00:17:49.568 "dma_device_type": 1 00:17:49.568 }, 00:17:49.568 { 00:17:49.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.568 "dma_device_type": 2 00:17:49.568 } 00:17:49.568 ], 00:17:49.568 "driver_specific": {} 00:17:49.568 } 00:17:49.568 ] 00:17:49.568 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.568 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:49.568 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:49.569 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:49.569 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:49.569 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:17:49.569 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:49.569 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:49.569 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.569 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.569 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.569 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.569 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.569 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.569 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.569 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.569 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.569 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.569 "name": "Existed_Raid", 00:17:49.569 "uuid": "192aed49-3447-4045-8726-f6faf420e383", 00:17:49.569 "strip_size_kb": 0, 00:17:49.569 "state": "configuring", 00:17:49.569 "raid_level": "raid1", 00:17:49.569 "superblock": true, 00:17:49.569 "num_base_bdevs": 4, 00:17:49.569 "num_base_bdevs_discovered": 3, 00:17:49.569 "num_base_bdevs_operational": 4, 00:17:49.569 "base_bdevs_list": [ 00:17:49.569 { 00:17:49.569 "name": "BaseBdev1", 00:17:49.569 "uuid": "05ccca18-6a67-44d2-acfa-24229ad40f9f", 00:17:49.569 "is_configured": true, 00:17:49.569 "data_offset": 2048, 00:17:49.569 "data_size": 63488 
00:17:49.569 }, 00:17:49.569 { 00:17:49.569 "name": null, 00:17:49.569 "uuid": "3d9b6df2-1d23-44ec-85ff-0bd413df2222", 00:17:49.569 "is_configured": false, 00:17:49.569 "data_offset": 0, 00:17:49.569 "data_size": 63488 00:17:49.569 }, 00:17:49.569 { 00:17:49.569 "name": "BaseBdev3", 00:17:49.569 "uuid": "f0504bd6-6364-4665-b4a9-e2c62bfd5f41", 00:17:49.569 "is_configured": true, 00:17:49.569 "data_offset": 2048, 00:17:49.569 "data_size": 63488 00:17:49.569 }, 00:17:49.569 { 00:17:49.569 "name": "BaseBdev4", 00:17:49.569 "uuid": "893309eb-dea3-44da-bb34-205f66d39c1e", 00:17:49.569 "is_configured": true, 00:17:49.569 "data_offset": 2048, 00:17:49.569 "data_size": 63488 00:17:49.569 } 00:17:49.569 ] 00:17:49.569 }' 00:17:49.569 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.569 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.135 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:50.135 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.135 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.135 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.135 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.135 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:50.135 13:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:50.135 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.135 13:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.135 
[2024-12-06 13:12:37.004452] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:50.135 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.135 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:50.135 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:50.135 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:50.135 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.135 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.135 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:50.135 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.135 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.135 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.135 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.135 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.135 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.135 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.135 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.135 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.135 13:12:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.135 "name": "Existed_Raid", 00:17:50.135 "uuid": "192aed49-3447-4045-8726-f6faf420e383", 00:17:50.135 "strip_size_kb": 0, 00:17:50.135 "state": "configuring", 00:17:50.135 "raid_level": "raid1", 00:17:50.135 "superblock": true, 00:17:50.135 "num_base_bdevs": 4, 00:17:50.135 "num_base_bdevs_discovered": 2, 00:17:50.135 "num_base_bdevs_operational": 4, 00:17:50.135 "base_bdevs_list": [ 00:17:50.135 { 00:17:50.135 "name": "BaseBdev1", 00:17:50.135 "uuid": "05ccca18-6a67-44d2-acfa-24229ad40f9f", 00:17:50.135 "is_configured": true, 00:17:50.135 "data_offset": 2048, 00:17:50.135 "data_size": 63488 00:17:50.135 }, 00:17:50.136 { 00:17:50.136 "name": null, 00:17:50.136 "uuid": "3d9b6df2-1d23-44ec-85ff-0bd413df2222", 00:17:50.136 "is_configured": false, 00:17:50.136 "data_offset": 0, 00:17:50.136 "data_size": 63488 00:17:50.136 }, 00:17:50.136 { 00:17:50.136 "name": null, 00:17:50.136 "uuid": "f0504bd6-6364-4665-b4a9-e2c62bfd5f41", 00:17:50.136 "is_configured": false, 00:17:50.136 "data_offset": 0, 00:17:50.136 "data_size": 63488 00:17:50.136 }, 00:17:50.136 { 00:17:50.136 "name": "BaseBdev4", 00:17:50.136 "uuid": "893309eb-dea3-44da-bb34-205f66d39c1e", 00:17:50.136 "is_configured": true, 00:17:50.136 "data_offset": 2048, 00:17:50.136 "data_size": 63488 00:17:50.136 } 00:17:50.136 ] 00:17:50.136 }' 00:17:50.136 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.136 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.701 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:50.701 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.701 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.701 
13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.701 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.701 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:50.701 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:50.701 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.701 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.701 [2024-12-06 13:12:37.632593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:50.701 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.701 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:50.701 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:50.701 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:50.701 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.701 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.701 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:50.701 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.701 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.701 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:50.701 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.701 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.701 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.701 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.701 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.702 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.702 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.702 "name": "Existed_Raid", 00:17:50.702 "uuid": "192aed49-3447-4045-8726-f6faf420e383", 00:17:50.702 "strip_size_kb": 0, 00:17:50.702 "state": "configuring", 00:17:50.702 "raid_level": "raid1", 00:17:50.702 "superblock": true, 00:17:50.702 "num_base_bdevs": 4, 00:17:50.702 "num_base_bdevs_discovered": 3, 00:17:50.702 "num_base_bdevs_operational": 4, 00:17:50.702 "base_bdevs_list": [ 00:17:50.702 { 00:17:50.702 "name": "BaseBdev1", 00:17:50.702 "uuid": "05ccca18-6a67-44d2-acfa-24229ad40f9f", 00:17:50.702 "is_configured": true, 00:17:50.702 "data_offset": 2048, 00:17:50.702 "data_size": 63488 00:17:50.702 }, 00:17:50.702 { 00:17:50.702 "name": null, 00:17:50.702 "uuid": "3d9b6df2-1d23-44ec-85ff-0bd413df2222", 00:17:50.702 "is_configured": false, 00:17:50.702 "data_offset": 0, 00:17:50.702 "data_size": 63488 00:17:50.702 }, 00:17:50.702 { 00:17:50.702 "name": "BaseBdev3", 00:17:50.702 "uuid": "f0504bd6-6364-4665-b4a9-e2c62bfd5f41", 00:17:50.702 "is_configured": true, 00:17:50.702 "data_offset": 2048, 00:17:50.702 "data_size": 63488 00:17:50.702 }, 00:17:50.702 { 00:17:50.702 "name": "BaseBdev4", 00:17:50.702 "uuid": 
"893309eb-dea3-44da-bb34-205f66d39c1e", 00:17:50.702 "is_configured": true, 00:17:50.702 "data_offset": 2048, 00:17:50.702 "data_size": 63488 00:17:50.702 } 00:17:50.702 ] 00:17:50.702 }' 00:17:50.702 13:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.702 13:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.266 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.266 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.266 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.266 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:51.266 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.266 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:51.266 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:51.266 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.266 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.266 [2024-12-06 13:12:38.228891] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:51.543 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.543 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:51.543 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:51.543 13:12:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:51.543 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:51.543 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:51.543 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:51.543 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.543 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.543 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.543 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.543 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.543 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.543 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.543 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.543 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.543 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.543 "name": "Existed_Raid", 00:17:51.543 "uuid": "192aed49-3447-4045-8726-f6faf420e383", 00:17:51.543 "strip_size_kb": 0, 00:17:51.543 "state": "configuring", 00:17:51.543 "raid_level": "raid1", 00:17:51.543 "superblock": true, 00:17:51.543 "num_base_bdevs": 4, 00:17:51.543 "num_base_bdevs_discovered": 2, 00:17:51.543 "num_base_bdevs_operational": 4, 00:17:51.543 "base_bdevs_list": [ 00:17:51.543 { 00:17:51.543 "name": null, 00:17:51.543 
"uuid": "05ccca18-6a67-44d2-acfa-24229ad40f9f", 00:17:51.543 "is_configured": false, 00:17:51.543 "data_offset": 0, 00:17:51.543 "data_size": 63488 00:17:51.543 }, 00:17:51.543 { 00:17:51.543 "name": null, 00:17:51.543 "uuid": "3d9b6df2-1d23-44ec-85ff-0bd413df2222", 00:17:51.543 "is_configured": false, 00:17:51.543 "data_offset": 0, 00:17:51.543 "data_size": 63488 00:17:51.543 }, 00:17:51.543 { 00:17:51.543 "name": "BaseBdev3", 00:17:51.543 "uuid": "f0504bd6-6364-4665-b4a9-e2c62bfd5f41", 00:17:51.543 "is_configured": true, 00:17:51.543 "data_offset": 2048, 00:17:51.543 "data_size": 63488 00:17:51.543 }, 00:17:51.543 { 00:17:51.543 "name": "BaseBdev4", 00:17:51.543 "uuid": "893309eb-dea3-44da-bb34-205f66d39c1e", 00:17:51.543 "is_configured": true, 00:17:51.543 "data_offset": 2048, 00:17:51.543 "data_size": 63488 00:17:51.543 } 00:17:51.543 ] 00:17:51.543 }' 00:17:51.543 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.543 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.115 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:52.115 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.115 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.115 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.115 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.115 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:52.115 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:52.115 13:12:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.115 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.115 [2024-12-06 13:12:38.922317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:52.115 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.115 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:52.115 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:52.115 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:52.115 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:52.115 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.115 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:52.115 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.115 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.115 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.115 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.115 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.115 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.115 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.115 13:12:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.115 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.115 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.115 "name": "Existed_Raid", 00:17:52.115 "uuid": "192aed49-3447-4045-8726-f6faf420e383", 00:17:52.115 "strip_size_kb": 0, 00:17:52.115 "state": "configuring", 00:17:52.115 "raid_level": "raid1", 00:17:52.115 "superblock": true, 00:17:52.115 "num_base_bdevs": 4, 00:17:52.115 "num_base_bdevs_discovered": 3, 00:17:52.115 "num_base_bdevs_operational": 4, 00:17:52.115 "base_bdevs_list": [ 00:17:52.115 { 00:17:52.115 "name": null, 00:17:52.115 "uuid": "05ccca18-6a67-44d2-acfa-24229ad40f9f", 00:17:52.115 "is_configured": false, 00:17:52.115 "data_offset": 0, 00:17:52.115 "data_size": 63488 00:17:52.115 }, 00:17:52.115 { 00:17:52.115 "name": "BaseBdev2", 00:17:52.115 "uuid": "3d9b6df2-1d23-44ec-85ff-0bd413df2222", 00:17:52.115 "is_configured": true, 00:17:52.115 "data_offset": 2048, 00:17:52.116 "data_size": 63488 00:17:52.116 }, 00:17:52.116 { 00:17:52.116 "name": "BaseBdev3", 00:17:52.116 "uuid": "f0504bd6-6364-4665-b4a9-e2c62bfd5f41", 00:17:52.116 "is_configured": true, 00:17:52.116 "data_offset": 2048, 00:17:52.116 "data_size": 63488 00:17:52.116 }, 00:17:52.116 { 00:17:52.116 "name": "BaseBdev4", 00:17:52.116 "uuid": "893309eb-dea3-44da-bb34-205f66d39c1e", 00:17:52.116 "is_configured": true, 00:17:52.116 "data_offset": 2048, 00:17:52.116 "data_size": 63488 00:17:52.116 } 00:17:52.116 ] 00:17:52.116 }' 00:17:52.116 13:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.116 13:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.683 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.683 13:12:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:52.683 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.683 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.683 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.683 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:52.683 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.683 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:52.683 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.683 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.683 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.683 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 05ccca18-6a67-44d2-acfa-24229ad40f9f 00:17:52.683 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.683 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.683 [2024-12-06 13:12:39.612886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:52.683 [2024-12-06 13:12:39.613257] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:52.683 [2024-12-06 13:12:39.613283] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:52.683 NewBaseBdev 00:17:52.683 [2024-12-06 13:12:39.613642] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:52.683 [2024-12-06 13:12:39.613862] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:52.683 [2024-12-06 13:12:39.613880] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:52.683 [2024-12-06 13:12:39.614073] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.683 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.683 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:52.683 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:17:52.683 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:52.683 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:52.683 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:52.683 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:52.683 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:52.683 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.683 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.684 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.684 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:52.684 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.684 
13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.684 [ 00:17:52.684 { 00:17:52.684 "name": "NewBaseBdev", 00:17:52.684 "aliases": [ 00:17:52.684 "05ccca18-6a67-44d2-acfa-24229ad40f9f" 00:17:52.684 ], 00:17:52.684 "product_name": "Malloc disk", 00:17:52.684 "block_size": 512, 00:17:52.684 "num_blocks": 65536, 00:17:52.684 "uuid": "05ccca18-6a67-44d2-acfa-24229ad40f9f", 00:17:52.684 "assigned_rate_limits": { 00:17:52.684 "rw_ios_per_sec": 0, 00:17:52.684 "rw_mbytes_per_sec": 0, 00:17:52.684 "r_mbytes_per_sec": 0, 00:17:52.684 "w_mbytes_per_sec": 0 00:17:52.684 }, 00:17:52.684 "claimed": true, 00:17:52.684 "claim_type": "exclusive_write", 00:17:52.684 "zoned": false, 00:17:52.684 "supported_io_types": { 00:17:52.684 "read": true, 00:17:52.684 "write": true, 00:17:52.684 "unmap": true, 00:17:52.684 "flush": true, 00:17:52.684 "reset": true, 00:17:52.684 "nvme_admin": false, 00:17:52.684 "nvme_io": false, 00:17:52.684 "nvme_io_md": false, 00:17:52.684 "write_zeroes": true, 00:17:52.684 "zcopy": true, 00:17:52.684 "get_zone_info": false, 00:17:52.684 "zone_management": false, 00:17:52.684 "zone_append": false, 00:17:52.684 "compare": false, 00:17:52.684 "compare_and_write": false, 00:17:52.684 "abort": true, 00:17:52.684 "seek_hole": false, 00:17:52.684 "seek_data": false, 00:17:52.684 "copy": true, 00:17:52.684 "nvme_iov_md": false 00:17:52.684 }, 00:17:52.684 "memory_domains": [ 00:17:52.684 { 00:17:52.684 "dma_device_id": "system", 00:17:52.684 "dma_device_type": 1 00:17:52.684 }, 00:17:52.684 { 00:17:52.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.684 "dma_device_type": 2 00:17:52.684 } 00:17:52.684 ], 00:17:52.684 "driver_specific": {} 00:17:52.684 } 00:17:52.684 ] 00:17:52.684 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.684 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:52.684 13:12:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:17:52.684 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:52.684 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.684 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:52.684 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.684 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:52.684 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.684 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.684 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.684 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.684 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.684 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.684 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.684 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.684 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.942 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.942 "name": "Existed_Raid", 00:17:52.942 "uuid": "192aed49-3447-4045-8726-f6faf420e383", 00:17:52.942 "strip_size_kb": 0, 00:17:52.942 
"state": "online", 00:17:52.942 "raid_level": "raid1", 00:17:52.942 "superblock": true, 00:17:52.942 "num_base_bdevs": 4, 00:17:52.942 "num_base_bdevs_discovered": 4, 00:17:52.942 "num_base_bdevs_operational": 4, 00:17:52.942 "base_bdevs_list": [ 00:17:52.942 { 00:17:52.942 "name": "NewBaseBdev", 00:17:52.942 "uuid": "05ccca18-6a67-44d2-acfa-24229ad40f9f", 00:17:52.942 "is_configured": true, 00:17:52.942 "data_offset": 2048, 00:17:52.942 "data_size": 63488 00:17:52.942 }, 00:17:52.942 { 00:17:52.942 "name": "BaseBdev2", 00:17:52.942 "uuid": "3d9b6df2-1d23-44ec-85ff-0bd413df2222", 00:17:52.942 "is_configured": true, 00:17:52.942 "data_offset": 2048, 00:17:52.942 "data_size": 63488 00:17:52.942 }, 00:17:52.942 { 00:17:52.942 "name": "BaseBdev3", 00:17:52.942 "uuid": "f0504bd6-6364-4665-b4a9-e2c62bfd5f41", 00:17:52.942 "is_configured": true, 00:17:52.942 "data_offset": 2048, 00:17:52.942 "data_size": 63488 00:17:52.942 }, 00:17:52.942 { 00:17:52.942 "name": "BaseBdev4", 00:17:52.942 "uuid": "893309eb-dea3-44da-bb34-205f66d39c1e", 00:17:52.942 "is_configured": true, 00:17:52.942 "data_offset": 2048, 00:17:52.943 "data_size": 63488 00:17:52.943 } 00:17:52.943 ] 00:17:52.943 }' 00:17:52.943 13:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.943 13:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.202 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:53.202 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:53.202 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:53.202 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:53.202 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:53.202 
13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:53.202 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:53.202 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:53.202 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.202 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.202 [2024-12-06 13:12:40.181606] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:53.202 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.461 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:53.461 "name": "Existed_Raid", 00:17:53.461 "aliases": [ 00:17:53.461 "192aed49-3447-4045-8726-f6faf420e383" 00:17:53.461 ], 00:17:53.461 "product_name": "Raid Volume", 00:17:53.461 "block_size": 512, 00:17:53.461 "num_blocks": 63488, 00:17:53.461 "uuid": "192aed49-3447-4045-8726-f6faf420e383", 00:17:53.461 "assigned_rate_limits": { 00:17:53.461 "rw_ios_per_sec": 0, 00:17:53.461 "rw_mbytes_per_sec": 0, 00:17:53.461 "r_mbytes_per_sec": 0, 00:17:53.461 "w_mbytes_per_sec": 0 00:17:53.461 }, 00:17:53.461 "claimed": false, 00:17:53.461 "zoned": false, 00:17:53.461 "supported_io_types": { 00:17:53.461 "read": true, 00:17:53.461 "write": true, 00:17:53.461 "unmap": false, 00:17:53.461 "flush": false, 00:17:53.461 "reset": true, 00:17:53.461 "nvme_admin": false, 00:17:53.461 "nvme_io": false, 00:17:53.461 "nvme_io_md": false, 00:17:53.461 "write_zeroes": true, 00:17:53.461 "zcopy": false, 00:17:53.461 "get_zone_info": false, 00:17:53.461 "zone_management": false, 00:17:53.461 "zone_append": false, 00:17:53.461 "compare": false, 00:17:53.461 "compare_and_write": false, 00:17:53.461 
"abort": false, 00:17:53.461 "seek_hole": false, 00:17:53.461 "seek_data": false, 00:17:53.461 "copy": false, 00:17:53.461 "nvme_iov_md": false 00:17:53.461 }, 00:17:53.461 "memory_domains": [ 00:17:53.461 { 00:17:53.461 "dma_device_id": "system", 00:17:53.461 "dma_device_type": 1 00:17:53.461 }, 00:17:53.461 { 00:17:53.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.461 "dma_device_type": 2 00:17:53.461 }, 00:17:53.461 { 00:17:53.461 "dma_device_id": "system", 00:17:53.461 "dma_device_type": 1 00:17:53.461 }, 00:17:53.461 { 00:17:53.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.461 "dma_device_type": 2 00:17:53.461 }, 00:17:53.461 { 00:17:53.461 "dma_device_id": "system", 00:17:53.461 "dma_device_type": 1 00:17:53.461 }, 00:17:53.461 { 00:17:53.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.461 "dma_device_type": 2 00:17:53.461 }, 00:17:53.461 { 00:17:53.461 "dma_device_id": "system", 00:17:53.461 "dma_device_type": 1 00:17:53.461 }, 00:17:53.461 { 00:17:53.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.461 "dma_device_type": 2 00:17:53.461 } 00:17:53.461 ], 00:17:53.461 "driver_specific": { 00:17:53.461 "raid": { 00:17:53.461 "uuid": "192aed49-3447-4045-8726-f6faf420e383", 00:17:53.461 "strip_size_kb": 0, 00:17:53.461 "state": "online", 00:17:53.461 "raid_level": "raid1", 00:17:53.461 "superblock": true, 00:17:53.461 "num_base_bdevs": 4, 00:17:53.461 "num_base_bdevs_discovered": 4, 00:17:53.461 "num_base_bdevs_operational": 4, 00:17:53.461 "base_bdevs_list": [ 00:17:53.461 { 00:17:53.461 "name": "NewBaseBdev", 00:17:53.461 "uuid": "05ccca18-6a67-44d2-acfa-24229ad40f9f", 00:17:53.461 "is_configured": true, 00:17:53.461 "data_offset": 2048, 00:17:53.461 "data_size": 63488 00:17:53.461 }, 00:17:53.461 { 00:17:53.461 "name": "BaseBdev2", 00:17:53.461 "uuid": "3d9b6df2-1d23-44ec-85ff-0bd413df2222", 00:17:53.461 "is_configured": true, 00:17:53.461 "data_offset": 2048, 00:17:53.461 "data_size": 63488 00:17:53.461 }, 00:17:53.461 { 
00:17:53.461 "name": "BaseBdev3", 00:17:53.461 "uuid": "f0504bd6-6364-4665-b4a9-e2c62bfd5f41", 00:17:53.461 "is_configured": true, 00:17:53.461 "data_offset": 2048, 00:17:53.461 "data_size": 63488 00:17:53.461 }, 00:17:53.461 { 00:17:53.461 "name": "BaseBdev4", 00:17:53.461 "uuid": "893309eb-dea3-44da-bb34-205f66d39c1e", 00:17:53.461 "is_configured": true, 00:17:53.461 "data_offset": 2048, 00:17:53.461 "data_size": 63488 00:17:53.461 } 00:17:53.461 ] 00:17:53.461 } 00:17:53.461 } 00:17:53.461 }' 00:17:53.461 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:53.461 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:53.461 BaseBdev2 00:17:53.461 BaseBdev3 00:17:53.461 BaseBdev4' 00:17:53.461 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:53.461 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:53.461 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:53.461 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:53.461 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.461 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.461 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:53.461 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.461 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:17:53.461 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:53.461 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:53.461 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:53.461 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:53.461 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.461 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.461 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.461 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:53.461 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:53.461 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:53.461 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:53.461 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:53.461 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.461 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.461 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.720 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:53.720 13:12:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:53.720 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:53.720 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:53.720 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:53.720 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.720 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.720 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.720 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:53.720 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:53.720 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:53.720 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.720 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.720 [2024-12-06 13:12:40.553238] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:53.720 [2024-12-06 13:12:40.553319] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:53.720 [2024-12-06 13:12:40.553468] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:53.720 [2024-12-06 13:12:40.553949] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:53.720 [2024-12-06 13:12:40.553984] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:17:53.720 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.720 13:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74269 00:17:53.720 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74269 ']' 00:17:53.720 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74269 00:17:53.720 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:53.720 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.720 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74269 00:17:53.720 killing process with pid 74269 00:17:53.720 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:53.720 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:53.720 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74269' 00:17:53.720 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74269 00:17:53.720 [2024-12-06 13:12:40.596412] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:53.720 13:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74269 00:17:53.979 [2024-12-06 13:12:40.979784] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:55.356 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:55.357 00:17:55.357 real 0m13.432s 00:17:55.357 user 0m22.036s 00:17:55.357 sys 0m2.062s 00:17:55.357 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:17:55.357 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.357 ************************************ 00:17:55.357 END TEST raid_state_function_test_sb 00:17:55.357 ************************************ 00:17:55.357 13:12:42 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:17:55.357 13:12:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:55.357 13:12:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:55.357 13:12:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:55.357 ************************************ 00:17:55.357 START TEST raid_superblock_test 00:17:55.357 ************************************ 00:17:55.357 13:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:17:55.357 13:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:55.357 13:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:17:55.357 13:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:55.357 13:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:55.357 13:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:55.357 13:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:55.357 13:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:55.357 13:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:55.357 13:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:55.357 13:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:55.357 13:12:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:55.357 13:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:55.357 13:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:55.357 13:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:55.357 13:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:55.357 13:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74957 00:17:55.357 13:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74957 00:17:55.357 13:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:55.357 13:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74957 ']' 00:17:55.357 13:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.357 13:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:55.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.357 13:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.357 13:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:55.357 13:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.357 [2024-12-06 13:12:42.298889] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:17:55.357 [2024-12-06 13:12:42.299095] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74957 ] 00:17:55.616 [2024-12-06 13:12:42.490108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.875 [2024-12-06 13:12:42.639144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.875 [2024-12-06 13:12:42.871516] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:55.875 [2024-12-06 13:12:42.871625] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:56.442 13:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:56.442 13:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:17:56.442 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:56.442 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:56.442 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:56.442 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:56.442 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:56.442 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:56.442 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:56.442 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:56.442 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:17:56.442 
13:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.442 13:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.442 malloc1 00:17:56.442 13:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.442 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:56.442 13:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.442 13:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.442 [2024-12-06 13:12:43.375734] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:56.442 [2024-12-06 13:12:43.375866] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.442 [2024-12-06 13:12:43.375900] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:56.442 [2024-12-06 13:12:43.375916] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.442 [2024-12-06 13:12:43.379230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.442 [2024-12-06 13:12:43.379292] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:56.442 pt1 00:17:56.442 13:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.442 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:56.442 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:56.442 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:56.442 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:56.442 13:12:43 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:56.442 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:56.442 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:56.442 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:56.442 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:56.442 13:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.442 13:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.442 malloc2 00:17:56.442 13:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.442 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:56.442 13:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.442 13:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.442 [2024-12-06 13:12:43.438207] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:56.442 [2024-12-06 13:12:43.438314] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.443 [2024-12-06 13:12:43.438351] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:56.443 [2024-12-06 13:12:43.438367] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.443 [2024-12-06 13:12:43.441443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.443 [2024-12-06 13:12:43.441514] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:56.443 
pt2 00:17:56.443 13:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.443 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:56.443 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:56.443 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:56.443 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:56.443 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:56.443 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:56.443 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:56.443 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:56.443 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:56.443 13:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.443 13:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.701 malloc3 00:17:56.701 13:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.701 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:56.701 13:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.701 13:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.701 [2024-12-06 13:12:43.502734] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:56.701 [2024-12-06 13:12:43.502881] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.701 [2024-12-06 13:12:43.502922] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:56.701 [2024-12-06 13:12:43.502939] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.701 [2024-12-06 13:12:43.506136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.701 [2024-12-06 13:12:43.506195] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:56.701 pt3 00:17:56.701 13:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.701 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:56.701 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:56.701 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:17:56.701 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:17:56.701 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:56.701 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:56.701 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:56.701 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:56.701 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:17:56.701 13:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.701 13:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.701 malloc4 00:17:56.701 13:12:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.701 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:56.701 13:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.701 13:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.701 [2024-12-06 13:12:43.563561] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:56.701 [2024-12-06 13:12:43.563672] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.701 [2024-12-06 13:12:43.563706] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:56.701 [2024-12-06 13:12:43.563722] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.701 [2024-12-06 13:12:43.566911] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.701 [2024-12-06 13:12:43.566962] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:56.701 pt4 00:17:56.701 13:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.701 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:56.701 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:56.701 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:17:56.701 13:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.701 13:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.701 [2024-12-06 13:12:43.575668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:56.702 [2024-12-06 13:12:43.578322] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:56.702 [2024-12-06 13:12:43.578433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:56.702 [2024-12-06 13:12:43.578559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:56.702 [2024-12-06 13:12:43.578864] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:56.702 [2024-12-06 13:12:43.578901] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:56.702 [2024-12-06 13:12:43.579257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:56.702 [2024-12-06 13:12:43.579542] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:56.702 [2024-12-06 13:12:43.579576] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:56.702 [2024-12-06 13:12:43.579818] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.702 13:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.702 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:56.702 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.702 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.702 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:56.702 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.702 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:56.702 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.702 
13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.702 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.702 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.702 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.702 13:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.702 13:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.702 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.702 13:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.702 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.702 "name": "raid_bdev1", 00:17:56.702 "uuid": "f5e230c6-4609-48cc-b9b9-5f714e259cb8", 00:17:56.702 "strip_size_kb": 0, 00:17:56.702 "state": "online", 00:17:56.702 "raid_level": "raid1", 00:17:56.702 "superblock": true, 00:17:56.702 "num_base_bdevs": 4, 00:17:56.702 "num_base_bdevs_discovered": 4, 00:17:56.702 "num_base_bdevs_operational": 4, 00:17:56.702 "base_bdevs_list": [ 00:17:56.702 { 00:17:56.702 "name": "pt1", 00:17:56.702 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:56.702 "is_configured": true, 00:17:56.702 "data_offset": 2048, 00:17:56.702 "data_size": 63488 00:17:56.702 }, 00:17:56.702 { 00:17:56.702 "name": "pt2", 00:17:56.702 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:56.702 "is_configured": true, 00:17:56.702 "data_offset": 2048, 00:17:56.702 "data_size": 63488 00:17:56.702 }, 00:17:56.702 { 00:17:56.702 "name": "pt3", 00:17:56.702 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:56.702 "is_configured": true, 00:17:56.702 "data_offset": 2048, 00:17:56.702 "data_size": 63488 
00:17:56.702 }, 00:17:56.702 { 00:17:56.702 "name": "pt4", 00:17:56.702 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:56.702 "is_configured": true, 00:17:56.702 "data_offset": 2048, 00:17:56.702 "data_size": 63488 00:17:56.702 } 00:17:56.702 ] 00:17:56.702 }' 00:17:56.702 13:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.702 13:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.268 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:57.268 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:57.268 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:57.268 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:57.268 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:57.268 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:57.268 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:57.268 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:57.268 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.268 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.268 [2024-12-06 13:12:44.124383] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:57.268 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.268 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:57.268 "name": "raid_bdev1", 00:17:57.268 "aliases": [ 00:17:57.268 "f5e230c6-4609-48cc-b9b9-5f714e259cb8" 00:17:57.268 ], 
00:17:57.268 "product_name": "Raid Volume", 00:17:57.268 "block_size": 512, 00:17:57.268 "num_blocks": 63488, 00:17:57.268 "uuid": "f5e230c6-4609-48cc-b9b9-5f714e259cb8", 00:17:57.268 "assigned_rate_limits": { 00:17:57.268 "rw_ios_per_sec": 0, 00:17:57.268 "rw_mbytes_per_sec": 0, 00:17:57.268 "r_mbytes_per_sec": 0, 00:17:57.268 "w_mbytes_per_sec": 0 00:17:57.268 }, 00:17:57.268 "claimed": false, 00:17:57.268 "zoned": false, 00:17:57.268 "supported_io_types": { 00:17:57.268 "read": true, 00:17:57.268 "write": true, 00:17:57.268 "unmap": false, 00:17:57.268 "flush": false, 00:17:57.268 "reset": true, 00:17:57.268 "nvme_admin": false, 00:17:57.268 "nvme_io": false, 00:17:57.268 "nvme_io_md": false, 00:17:57.268 "write_zeroes": true, 00:17:57.269 "zcopy": false, 00:17:57.269 "get_zone_info": false, 00:17:57.269 "zone_management": false, 00:17:57.269 "zone_append": false, 00:17:57.269 "compare": false, 00:17:57.269 "compare_and_write": false, 00:17:57.269 "abort": false, 00:17:57.269 "seek_hole": false, 00:17:57.269 "seek_data": false, 00:17:57.269 "copy": false, 00:17:57.269 "nvme_iov_md": false 00:17:57.269 }, 00:17:57.269 "memory_domains": [ 00:17:57.269 { 00:17:57.269 "dma_device_id": "system", 00:17:57.269 "dma_device_type": 1 00:17:57.269 }, 00:17:57.269 { 00:17:57.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.269 "dma_device_type": 2 00:17:57.269 }, 00:17:57.269 { 00:17:57.269 "dma_device_id": "system", 00:17:57.269 "dma_device_type": 1 00:17:57.269 }, 00:17:57.269 { 00:17:57.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.269 "dma_device_type": 2 00:17:57.269 }, 00:17:57.269 { 00:17:57.269 "dma_device_id": "system", 00:17:57.269 "dma_device_type": 1 00:17:57.269 }, 00:17:57.269 { 00:17:57.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.269 "dma_device_type": 2 00:17:57.269 }, 00:17:57.269 { 00:17:57.269 "dma_device_id": "system", 00:17:57.269 "dma_device_type": 1 00:17:57.269 }, 00:17:57.269 { 00:17:57.269 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:57.269 "dma_device_type": 2 00:17:57.269 } 00:17:57.269 ], 00:17:57.269 "driver_specific": { 00:17:57.269 "raid": { 00:17:57.269 "uuid": "f5e230c6-4609-48cc-b9b9-5f714e259cb8", 00:17:57.269 "strip_size_kb": 0, 00:17:57.269 "state": "online", 00:17:57.269 "raid_level": "raid1", 00:17:57.269 "superblock": true, 00:17:57.269 "num_base_bdevs": 4, 00:17:57.269 "num_base_bdevs_discovered": 4, 00:17:57.269 "num_base_bdevs_operational": 4, 00:17:57.269 "base_bdevs_list": [ 00:17:57.269 { 00:17:57.269 "name": "pt1", 00:17:57.269 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:57.269 "is_configured": true, 00:17:57.269 "data_offset": 2048, 00:17:57.269 "data_size": 63488 00:17:57.269 }, 00:17:57.269 { 00:17:57.269 "name": "pt2", 00:17:57.269 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:57.269 "is_configured": true, 00:17:57.269 "data_offset": 2048, 00:17:57.269 "data_size": 63488 00:17:57.269 }, 00:17:57.269 { 00:17:57.269 "name": "pt3", 00:17:57.269 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:57.269 "is_configured": true, 00:17:57.269 "data_offset": 2048, 00:17:57.269 "data_size": 63488 00:17:57.269 }, 00:17:57.269 { 00:17:57.269 "name": "pt4", 00:17:57.269 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:57.269 "is_configured": true, 00:17:57.269 "data_offset": 2048, 00:17:57.269 "data_size": 63488 00:17:57.269 } 00:17:57.269 ] 00:17:57.269 } 00:17:57.269 } 00:17:57.269 }' 00:17:57.269 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:57.269 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:57.269 pt2 00:17:57.269 pt3 00:17:57.269 pt4' 00:17:57.269 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.269 13:12:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:57.269 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:57.269 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:57.269 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.269 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.269 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.527 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.527 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:57.527 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:57.527 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:57.527 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.527 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:57.527 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.527 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.527 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.527 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:57.527 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:57.527 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:57.527 13:12:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.527 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:57.527 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.527 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.527 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.527 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:57.527 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:57.527 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:57.527 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:57.527 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.527 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.527 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.527 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.527 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:57.527 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:57.527 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:57.527 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.527 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:17:57.527 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:57.527 [2024-12-06 13:12:44.488488] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:57.527 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.527 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f5e230c6-4609-48cc-b9b9-5f714e259cb8 00:17:57.527 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f5e230c6-4609-48cc-b9b9-5f714e259cb8 ']' 00:17:57.527 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:57.527 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.527 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.527 [2024-12-06 13:12:44.540065] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:57.527 [2024-12-06 13:12:44.540098] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:57.527 [2024-12-06 13:12:44.540218] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:57.527 [2024-12-06 13:12:44.540362] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:57.527 [2024-12-06 13:12:44.540388] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.785 [2024-12-06 13:12:44.700180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:57.785 [2024-12-06 13:12:44.702993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:57.785 [2024-12-06 13:12:44.703115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:57.785 [2024-12-06 13:12:44.703174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:57.785 [2024-12-06 13:12:44.703252] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:57.785 [2024-12-06 13:12:44.703380] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:57.785 [2024-12-06 13:12:44.703416] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:57.785 [2024-12-06 13:12:44.703450] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:17:57.785 [2024-12-06 13:12:44.703474] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:57.785 [2024-12-06 13:12:44.703491] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 
00:17:57.785 request: 00:17:57.785 { 00:17:57.785 "name": "raid_bdev1", 00:17:57.785 "raid_level": "raid1", 00:17:57.785 "base_bdevs": [ 00:17:57.785 "malloc1", 00:17:57.785 "malloc2", 00:17:57.785 "malloc3", 00:17:57.785 "malloc4" 00:17:57.785 ], 00:17:57.785 "superblock": false, 00:17:57.785 "method": "bdev_raid_create", 00:17:57.785 "req_id": 1 00:17:57.785 } 00:17:57.785 Got JSON-RPC error response 00:17:57.785 response: 00:17:57.785 { 00:17:57.785 "code": -17, 00:17:57.785 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:57.785 } 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:57.785 13:12:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.785 [2024-12-06 13:12:44.764272] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:57.785 [2024-12-06 13:12:44.764407] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:57.785 [2024-12-06 13:12:44.764438] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:57.785 [2024-12-06 13:12:44.764455] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.785 [2024-12-06 13:12:44.767853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.785 [2024-12-06 13:12:44.767955] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:57.785 [2024-12-06 13:12:44.768077] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:57.785 [2024-12-06 13:12:44.768165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:57.785 pt1 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:57.785 13:12:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.785 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.786 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.043 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.043 "name": "raid_bdev1", 00:17:58.043 "uuid": "f5e230c6-4609-48cc-b9b9-5f714e259cb8", 00:17:58.043 "strip_size_kb": 0, 00:17:58.043 "state": "configuring", 00:17:58.043 "raid_level": "raid1", 00:17:58.043 "superblock": true, 00:17:58.043 "num_base_bdevs": 4, 00:17:58.043 "num_base_bdevs_discovered": 1, 00:17:58.043 "num_base_bdevs_operational": 4, 00:17:58.043 "base_bdevs_list": [ 00:17:58.043 { 00:17:58.043 "name": "pt1", 00:17:58.043 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:58.043 "is_configured": true, 00:17:58.043 "data_offset": 2048, 00:17:58.043 "data_size": 63488 00:17:58.043 }, 00:17:58.043 { 00:17:58.043 "name": null, 00:17:58.043 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:58.043 "is_configured": false, 00:17:58.043 "data_offset": 2048, 00:17:58.043 "data_size": 63488 00:17:58.043 }, 00:17:58.043 { 00:17:58.043 "name": null, 00:17:58.043 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:58.043 
"is_configured": false, 00:17:58.043 "data_offset": 2048, 00:17:58.043 "data_size": 63488 00:17:58.043 }, 00:17:58.043 { 00:17:58.043 "name": null, 00:17:58.043 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:58.043 "is_configured": false, 00:17:58.043 "data_offset": 2048, 00:17:58.043 "data_size": 63488 00:17:58.043 } 00:17:58.043 ] 00:17:58.043 }' 00:17:58.043 13:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.043 13:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.377 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:17:58.377 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:58.377 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.377 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.377 [2024-12-06 13:12:45.296656] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:58.377 [2024-12-06 13:12:45.296761] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.377 [2024-12-06 13:12:45.296807] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:58.377 [2024-12-06 13:12:45.296827] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.377 [2024-12-06 13:12:45.297499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.377 [2024-12-06 13:12:45.297565] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:58.377 [2024-12-06 13:12:45.297690] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:58.377 [2024-12-06 13:12:45.297734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:17:58.377 pt2 00:17:58.377 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.377 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:17:58.377 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.377 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.377 [2024-12-06 13:12:45.304611] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:58.377 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.377 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:17:58.377 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.377 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:58.377 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:58.377 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:58.377 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:58.377 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.377 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.377 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.377 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.377 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.377 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:17:58.377 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.377 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.377 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.377 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.377 "name": "raid_bdev1", 00:17:58.377 "uuid": "f5e230c6-4609-48cc-b9b9-5f714e259cb8", 00:17:58.377 "strip_size_kb": 0, 00:17:58.377 "state": "configuring", 00:17:58.377 "raid_level": "raid1", 00:17:58.377 "superblock": true, 00:17:58.377 "num_base_bdevs": 4, 00:17:58.377 "num_base_bdevs_discovered": 1, 00:17:58.377 "num_base_bdevs_operational": 4, 00:17:58.377 "base_bdevs_list": [ 00:17:58.377 { 00:17:58.377 "name": "pt1", 00:17:58.377 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:58.377 "is_configured": true, 00:17:58.377 "data_offset": 2048, 00:17:58.377 "data_size": 63488 00:17:58.377 }, 00:17:58.377 { 00:17:58.377 "name": null, 00:17:58.377 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:58.377 "is_configured": false, 00:17:58.377 "data_offset": 0, 00:17:58.377 "data_size": 63488 00:17:58.377 }, 00:17:58.377 { 00:17:58.377 "name": null, 00:17:58.377 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:58.377 "is_configured": false, 00:17:58.377 "data_offset": 2048, 00:17:58.377 "data_size": 63488 00:17:58.377 }, 00:17:58.377 { 00:17:58.377 "name": null, 00:17:58.377 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:58.377 "is_configured": false, 00:17:58.377 "data_offset": 2048, 00:17:58.377 "data_size": 63488 00:17:58.377 } 00:17:58.377 ] 00:17:58.377 }' 00:17:58.377 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.377 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.950 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:17:58.950 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:58.950 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:58.950 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.950 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.951 [2024-12-06 13:12:45.876888] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:58.951 [2024-12-06 13:12:45.876997] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.951 [2024-12-06 13:12:45.877034] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:58.951 [2024-12-06 13:12:45.877050] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.951 [2024-12-06 13:12:45.877740] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.951 [2024-12-06 13:12:45.877777] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:58.951 [2024-12-06 13:12:45.877942] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:58.951 [2024-12-06 13:12:45.877990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:58.951 pt2 00:17:58.951 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.951 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:58.951 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:58.951 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:58.951 13:12:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.951 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.951 [2024-12-06 13:12:45.888900] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:58.951 [2024-12-06 13:12:45.889021] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.951 [2024-12-06 13:12:45.889057] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:58.951 [2024-12-06 13:12:45.889073] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.951 [2024-12-06 13:12:45.889755] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.951 [2024-12-06 13:12:45.889814] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:58.951 [2024-12-06 13:12:45.889980] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:58.951 [2024-12-06 13:12:45.890032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:58.951 pt3 00:17:58.951 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.951 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:58.951 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:58.951 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:58.951 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.951 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.951 [2024-12-06 13:12:45.900790] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:58.951 [2024-12-06 
13:12:45.900904] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.951 [2024-12-06 13:12:45.900935] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:58.951 [2024-12-06 13:12:45.900949] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.951 [2024-12-06 13:12:45.901574] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.951 [2024-12-06 13:12:45.901631] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:58.951 [2024-12-06 13:12:45.901737] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:58.951 [2024-12-06 13:12:45.901779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:58.951 [2024-12-06 13:12:45.901989] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:58.951 [2024-12-06 13:12:45.902016] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:58.951 [2024-12-06 13:12:45.902378] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:58.951 [2024-12-06 13:12:45.902666] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:58.951 [2024-12-06 13:12:45.902697] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:58.951 [2024-12-06 13:12:45.902906] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.951 pt4 00:17:58.951 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.951 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:58.951 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:58.951 13:12:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:58.951 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.951 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.951 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:58.951 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:58.951 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:58.951 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.951 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.951 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.951 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.951 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.951 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.951 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.951 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.951 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.951 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.951 "name": "raid_bdev1", 00:17:58.951 "uuid": "f5e230c6-4609-48cc-b9b9-5f714e259cb8", 00:17:58.951 "strip_size_kb": 0, 00:17:58.951 "state": "online", 00:17:58.951 "raid_level": "raid1", 00:17:58.951 "superblock": true, 00:17:58.951 "num_base_bdevs": 4, 00:17:58.951 
"num_base_bdevs_discovered": 4, 00:17:58.951 "num_base_bdevs_operational": 4, 00:17:58.951 "base_bdevs_list": [ 00:17:58.951 { 00:17:58.951 "name": "pt1", 00:17:58.951 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:58.951 "is_configured": true, 00:17:58.951 "data_offset": 2048, 00:17:58.951 "data_size": 63488 00:17:58.951 }, 00:17:58.951 { 00:17:58.951 "name": "pt2", 00:17:58.951 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:58.951 "is_configured": true, 00:17:58.951 "data_offset": 2048, 00:17:58.951 "data_size": 63488 00:17:58.951 }, 00:17:58.951 { 00:17:58.951 "name": "pt3", 00:17:58.951 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:58.951 "is_configured": true, 00:17:58.951 "data_offset": 2048, 00:17:58.951 "data_size": 63488 00:17:58.951 }, 00:17:58.951 { 00:17:58.951 "name": "pt4", 00:17:58.951 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:58.951 "is_configured": true, 00:17:58.951 "data_offset": 2048, 00:17:58.951 "data_size": 63488 00:17:58.951 } 00:17:58.951 ] 00:17:58.951 }' 00:17:58.951 13:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.951 13:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.516 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:59.516 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:59.516 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:59.516 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:59.516 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:59.516 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:59.516 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:17:59.516 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.516 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.516 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:59.516 [2024-12-06 13:12:46.453424] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:59.516 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.516 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:59.516 "name": "raid_bdev1", 00:17:59.516 "aliases": [ 00:17:59.516 "f5e230c6-4609-48cc-b9b9-5f714e259cb8" 00:17:59.516 ], 00:17:59.516 "product_name": "Raid Volume", 00:17:59.516 "block_size": 512, 00:17:59.516 "num_blocks": 63488, 00:17:59.516 "uuid": "f5e230c6-4609-48cc-b9b9-5f714e259cb8", 00:17:59.516 "assigned_rate_limits": { 00:17:59.516 "rw_ios_per_sec": 0, 00:17:59.516 "rw_mbytes_per_sec": 0, 00:17:59.516 "r_mbytes_per_sec": 0, 00:17:59.516 "w_mbytes_per_sec": 0 00:17:59.516 }, 00:17:59.516 "claimed": false, 00:17:59.516 "zoned": false, 00:17:59.516 "supported_io_types": { 00:17:59.516 "read": true, 00:17:59.516 "write": true, 00:17:59.516 "unmap": false, 00:17:59.516 "flush": false, 00:17:59.516 "reset": true, 00:17:59.516 "nvme_admin": false, 00:17:59.516 "nvme_io": false, 00:17:59.516 "nvme_io_md": false, 00:17:59.516 "write_zeroes": true, 00:17:59.516 "zcopy": false, 00:17:59.516 "get_zone_info": false, 00:17:59.516 "zone_management": false, 00:17:59.516 "zone_append": false, 00:17:59.516 "compare": false, 00:17:59.516 "compare_and_write": false, 00:17:59.516 "abort": false, 00:17:59.516 "seek_hole": false, 00:17:59.516 "seek_data": false, 00:17:59.516 "copy": false, 00:17:59.516 "nvme_iov_md": false 00:17:59.516 }, 00:17:59.516 "memory_domains": [ 00:17:59.516 { 00:17:59.516 "dma_device_id": "system", 00:17:59.516 
"dma_device_type": 1 00:17:59.516 }, 00:17:59.516 { 00:17:59.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.516 "dma_device_type": 2 00:17:59.516 }, 00:17:59.516 { 00:17:59.516 "dma_device_id": "system", 00:17:59.516 "dma_device_type": 1 00:17:59.516 }, 00:17:59.516 { 00:17:59.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.516 "dma_device_type": 2 00:17:59.516 }, 00:17:59.516 { 00:17:59.516 "dma_device_id": "system", 00:17:59.516 "dma_device_type": 1 00:17:59.516 }, 00:17:59.516 { 00:17:59.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.516 "dma_device_type": 2 00:17:59.516 }, 00:17:59.516 { 00:17:59.516 "dma_device_id": "system", 00:17:59.516 "dma_device_type": 1 00:17:59.516 }, 00:17:59.516 { 00:17:59.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.516 "dma_device_type": 2 00:17:59.516 } 00:17:59.516 ], 00:17:59.516 "driver_specific": { 00:17:59.516 "raid": { 00:17:59.516 "uuid": "f5e230c6-4609-48cc-b9b9-5f714e259cb8", 00:17:59.516 "strip_size_kb": 0, 00:17:59.516 "state": "online", 00:17:59.516 "raid_level": "raid1", 00:17:59.516 "superblock": true, 00:17:59.516 "num_base_bdevs": 4, 00:17:59.516 "num_base_bdevs_discovered": 4, 00:17:59.516 "num_base_bdevs_operational": 4, 00:17:59.516 "base_bdevs_list": [ 00:17:59.516 { 00:17:59.516 "name": "pt1", 00:17:59.516 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:59.516 "is_configured": true, 00:17:59.516 "data_offset": 2048, 00:17:59.516 "data_size": 63488 00:17:59.516 }, 00:17:59.516 { 00:17:59.516 "name": "pt2", 00:17:59.516 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:59.516 "is_configured": true, 00:17:59.516 "data_offset": 2048, 00:17:59.516 "data_size": 63488 00:17:59.516 }, 00:17:59.516 { 00:17:59.516 "name": "pt3", 00:17:59.516 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:59.516 "is_configured": true, 00:17:59.516 "data_offset": 2048, 00:17:59.516 "data_size": 63488 00:17:59.516 }, 00:17:59.516 { 00:17:59.516 "name": "pt4", 00:17:59.516 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:17:59.516 "is_configured": true, 00:17:59.516 "data_offset": 2048, 00:17:59.516 "data_size": 63488 00:17:59.516 } 00:17:59.516 ] 00:17:59.516 } 00:17:59.516 } 00:17:59.516 }' 00:17:59.516 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:59.774 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:59.774 pt2 00:17:59.774 pt3 00:17:59.774 pt4' 00:17:59.774 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.774 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:59.774 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.774 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:59.774 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.774 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.774 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.774 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.774 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:59.774 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:59.774 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.774 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:59.774 13:12:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.774 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.774 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.774 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.774 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:59.774 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:59.774 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.774 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.774 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:59.774 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.774 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.774 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.774 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:59.774 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:59.774 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.774 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:59.774 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.774 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.774 13:12:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.774 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.032 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:00.032 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:00.032 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:00.032 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.032 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.032 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:00.032 [2024-12-06 13:12:46.825533] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:00.032 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.032 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f5e230c6-4609-48cc-b9b9-5f714e259cb8 '!=' f5e230c6-4609-48cc-b9b9-5f714e259cb8 ']' 00:18:00.032 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:00.032 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:00.032 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:00.032 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:00.032 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.032 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.032 [2024-12-06 13:12:46.877151] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:00.032 
13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.032 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:00.032 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.032 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.032 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.032 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.032 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:00.032 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.032 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.032 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.032 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.032 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.032 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.032 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.032 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.032 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.032 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.032 "name": "raid_bdev1", 00:18:00.032 "uuid": "f5e230c6-4609-48cc-b9b9-5f714e259cb8", 00:18:00.032 "strip_size_kb": 0, 00:18:00.032 "state": 
"online", 00:18:00.032 "raid_level": "raid1", 00:18:00.032 "superblock": true, 00:18:00.032 "num_base_bdevs": 4, 00:18:00.032 "num_base_bdevs_discovered": 3, 00:18:00.032 "num_base_bdevs_operational": 3, 00:18:00.032 "base_bdevs_list": [ 00:18:00.032 { 00:18:00.032 "name": null, 00:18:00.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.032 "is_configured": false, 00:18:00.032 "data_offset": 0, 00:18:00.032 "data_size": 63488 00:18:00.032 }, 00:18:00.032 { 00:18:00.032 "name": "pt2", 00:18:00.032 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:00.032 "is_configured": true, 00:18:00.032 "data_offset": 2048, 00:18:00.032 "data_size": 63488 00:18:00.032 }, 00:18:00.032 { 00:18:00.032 "name": "pt3", 00:18:00.032 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:00.032 "is_configured": true, 00:18:00.032 "data_offset": 2048, 00:18:00.032 "data_size": 63488 00:18:00.032 }, 00:18:00.032 { 00:18:00.032 "name": "pt4", 00:18:00.032 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:00.032 "is_configured": true, 00:18:00.032 "data_offset": 2048, 00:18:00.032 "data_size": 63488 00:18:00.032 } 00:18:00.032 ] 00:18:00.032 }' 00:18:00.032 13:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.032 13:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.600 [2024-12-06 13:12:47.385272] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:00.600 [2024-12-06 13:12:47.385335] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:00.600 [2024-12-06 13:12:47.385491] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:00.600 [2024-12-06 13:12:47.385642] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:00.600 [2024-12-06 13:12:47.385662] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.600 [2024-12-06 13:12:47.477249] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:00.600 [2024-12-06 
13:12:47.477361] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.600 [2024-12-06 13:12:47.477408] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:00.600 [2024-12-06 13:12:47.477422] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.600 [2024-12-06 13:12:47.480910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.600 [2024-12-06 13:12:47.480967] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:00.600 [2024-12-06 13:12:47.481113] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:00.600 [2024-12-06 13:12:47.481176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:00.600 pt2 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.600 13:12:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.600 "name": "raid_bdev1", 00:18:00.600 "uuid": "f5e230c6-4609-48cc-b9b9-5f714e259cb8", 00:18:00.600 "strip_size_kb": 0, 00:18:00.600 "state": "configuring", 00:18:00.600 "raid_level": "raid1", 00:18:00.600 "superblock": true, 00:18:00.600 "num_base_bdevs": 4, 00:18:00.600 "num_base_bdevs_discovered": 1, 00:18:00.600 "num_base_bdevs_operational": 3, 00:18:00.600 "base_bdevs_list": [ 00:18:00.600 { 00:18:00.600 "name": null, 00:18:00.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.600 "is_configured": false, 00:18:00.600 "data_offset": 2048, 00:18:00.600 "data_size": 63488 00:18:00.600 }, 00:18:00.600 { 00:18:00.600 "name": "pt2", 00:18:00.600 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:00.600 "is_configured": true, 00:18:00.600 "data_offset": 2048, 00:18:00.600 "data_size": 63488 00:18:00.600 }, 00:18:00.600 { 00:18:00.600 "name": null, 00:18:00.600 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:00.600 "is_configured": false, 00:18:00.600 "data_offset": 2048, 00:18:00.600 "data_size": 63488 00:18:00.600 }, 00:18:00.600 { 00:18:00.600 "name": null, 00:18:00.600 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:00.600 "is_configured": false, 00:18:00.600 "data_offset": 2048, 00:18:00.600 "data_size": 63488 00:18:00.600 
} 00:18:00.600 ] 00:18:00.600 }' 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.600 13:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.168 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:01.168 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:01.168 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:01.168 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.168 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.168 [2024-12-06 13:12:48.009670] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:01.168 [2024-12-06 13:12:48.009779] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.168 [2024-12-06 13:12:48.009819] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:01.168 [2024-12-06 13:12:48.009835] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.168 [2024-12-06 13:12:48.010573] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.168 [2024-12-06 13:12:48.010612] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:01.168 [2024-12-06 13:12:48.010764] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:01.168 [2024-12-06 13:12:48.010805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:01.168 pt3 00:18:01.168 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.168 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # 
verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:01.168 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.168 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:01.168 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:01.168 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:01.168 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:01.168 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.168 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.168 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.168 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.168 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.168 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.168 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.168 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.168 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.168 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.168 "name": "raid_bdev1", 00:18:01.168 "uuid": "f5e230c6-4609-48cc-b9b9-5f714e259cb8", 00:18:01.168 "strip_size_kb": 0, 00:18:01.168 "state": "configuring", 00:18:01.168 "raid_level": "raid1", 00:18:01.168 "superblock": true, 00:18:01.168 "num_base_bdevs": 4, 00:18:01.168 "num_base_bdevs_discovered": 2, 
00:18:01.168 "num_base_bdevs_operational": 3, 00:18:01.168 "base_bdevs_list": [ 00:18:01.168 { 00:18:01.168 "name": null, 00:18:01.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.168 "is_configured": false, 00:18:01.168 "data_offset": 2048, 00:18:01.168 "data_size": 63488 00:18:01.168 }, 00:18:01.168 { 00:18:01.168 "name": "pt2", 00:18:01.168 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:01.168 "is_configured": true, 00:18:01.168 "data_offset": 2048, 00:18:01.168 "data_size": 63488 00:18:01.168 }, 00:18:01.168 { 00:18:01.168 "name": "pt3", 00:18:01.168 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:01.168 "is_configured": true, 00:18:01.168 "data_offset": 2048, 00:18:01.168 "data_size": 63488 00:18:01.168 }, 00:18:01.168 { 00:18:01.168 "name": null, 00:18:01.168 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:01.168 "is_configured": false, 00:18:01.168 "data_offset": 2048, 00:18:01.168 "data_size": 63488 00:18:01.168 } 00:18:01.168 ] 00:18:01.168 }' 00:18:01.168 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.168 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.736 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:01.736 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:01.736 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:18:01.736 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:01.736 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.737 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.737 [2024-12-06 13:12:48.541818] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:01.737 [2024-12-06 
13:12:48.541989] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.737 [2024-12-06 13:12:48.542030] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:18:01.737 [2024-12-06 13:12:48.542046] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.737 [2024-12-06 13:12:48.542776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.737 [2024-12-06 13:12:48.542814] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:01.737 [2024-12-06 13:12:48.542941] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:01.737 [2024-12-06 13:12:48.542993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:01.737 [2024-12-06 13:12:48.543180] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:01.737 [2024-12-06 13:12:48.543207] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:01.737 [2024-12-06 13:12:48.543553] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:01.737 [2024-12-06 13:12:48.543771] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:01.737 [2024-12-06 13:12:48.543800] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:01.737 [2024-12-06 13:12:48.543990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.737 pt4 00:18:01.737 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.737 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:01.737 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.737 13:12:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:01.737 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:01.737 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:01.737 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:01.737 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.737 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.737 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.737 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.737 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.737 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.737 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.737 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.737 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.737 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.737 "name": "raid_bdev1", 00:18:01.737 "uuid": "f5e230c6-4609-48cc-b9b9-5f714e259cb8", 00:18:01.737 "strip_size_kb": 0, 00:18:01.737 "state": "online", 00:18:01.737 "raid_level": "raid1", 00:18:01.737 "superblock": true, 00:18:01.737 "num_base_bdevs": 4, 00:18:01.737 "num_base_bdevs_discovered": 3, 00:18:01.737 "num_base_bdevs_operational": 3, 00:18:01.737 "base_bdevs_list": [ 00:18:01.737 { 00:18:01.737 "name": null, 00:18:01.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.737 
"is_configured": false, 00:18:01.737 "data_offset": 2048, 00:18:01.737 "data_size": 63488 00:18:01.737 }, 00:18:01.737 { 00:18:01.737 "name": "pt2", 00:18:01.737 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:01.737 "is_configured": true, 00:18:01.737 "data_offset": 2048, 00:18:01.737 "data_size": 63488 00:18:01.737 }, 00:18:01.737 { 00:18:01.737 "name": "pt3", 00:18:01.737 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:01.737 "is_configured": true, 00:18:01.737 "data_offset": 2048, 00:18:01.737 "data_size": 63488 00:18:01.737 }, 00:18:01.737 { 00:18:01.737 "name": "pt4", 00:18:01.737 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:01.737 "is_configured": true, 00:18:01.737 "data_offset": 2048, 00:18:01.737 "data_size": 63488 00:18:01.737 } 00:18:01.737 ] 00:18:01.737 }' 00:18:01.737 13:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.737 13:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.306 [2024-12-06 13:12:49.077987] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:02.306 [2024-12-06 13:12:49.078027] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:02.306 [2024-12-06 13:12:49.078163] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:02.306 [2024-12-06 13:12:49.078320] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:02.306 [2024-12-06 13:12:49.078344] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 
00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.306 [2024-12-06 13:12:49.153893] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:02.306 [2024-12-06 13:12:49.153979] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:18:02.306 [2024-12-06 13:12:49.154014] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:18:02.306 [2024-12-06 13:12:49.154036] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.306 [2024-12-06 13:12:49.157303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.306 [2024-12-06 13:12:49.157370] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:02.306 [2024-12-06 13:12:49.157522] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:02.306 [2024-12-06 13:12:49.157594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:02.306 [2024-12-06 13:12:49.157784] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:02.306 [2024-12-06 13:12:49.157822] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:02.306 [2024-12-06 13:12:49.157845] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:02.306 [2024-12-06 13:12:49.157927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:02.306 [2024-12-06 13:12:49.158072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:02.306 pt1 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.306 "name": "raid_bdev1", 00:18:02.306 "uuid": "f5e230c6-4609-48cc-b9b9-5f714e259cb8", 00:18:02.306 "strip_size_kb": 0, 00:18:02.306 "state": "configuring", 00:18:02.306 "raid_level": "raid1", 00:18:02.306 "superblock": true, 00:18:02.306 "num_base_bdevs": 4, 00:18:02.306 "num_base_bdevs_discovered": 2, 00:18:02.306 "num_base_bdevs_operational": 3, 00:18:02.306 "base_bdevs_list": [ 00:18:02.306 { 00:18:02.306 "name": null, 00:18:02.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.306 "is_configured": false, 00:18:02.306 
"data_offset": 2048, 00:18:02.306 "data_size": 63488 00:18:02.306 }, 00:18:02.306 { 00:18:02.306 "name": "pt2", 00:18:02.306 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:02.306 "is_configured": true, 00:18:02.306 "data_offset": 2048, 00:18:02.306 "data_size": 63488 00:18:02.306 }, 00:18:02.306 { 00:18:02.306 "name": "pt3", 00:18:02.306 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:02.306 "is_configured": true, 00:18:02.306 "data_offset": 2048, 00:18:02.306 "data_size": 63488 00:18:02.306 }, 00:18:02.306 { 00:18:02.306 "name": null, 00:18:02.306 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:02.306 "is_configured": false, 00:18:02.306 "data_offset": 2048, 00:18:02.306 "data_size": 63488 00:18:02.306 } 00:18:02.306 ] 00:18:02.306 }' 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.306 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.874 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:18:02.874 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.874 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.874 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:02.874 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.874 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:18:02.874 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:02.874 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.874 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:18:02.874 [2024-12-06 13:12:49.730335] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:02.874 [2024-12-06 13:12:49.730454] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.874 [2024-12-06 13:12:49.730539] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:18:02.874 [2024-12-06 13:12:49.730559] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.874 [2024-12-06 13:12:49.731342] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.874 [2024-12-06 13:12:49.731375] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:02.874 [2024-12-06 13:12:49.731546] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:02.874 [2024-12-06 13:12:49.731586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:02.874 [2024-12-06 13:12:49.731774] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:02.874 [2024-12-06 13:12:49.731791] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:02.874 [2024-12-06 13:12:49.732204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:02.874 [2024-12-06 13:12:49.732411] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:02.874 [2024-12-06 13:12:49.732448] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:02.874 [2024-12-06 13:12:49.732668] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.874 pt4 00:18:02.874 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.874 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 
00:18:02.874 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.874 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.874 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.874 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.874 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:02.874 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.874 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.874 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.874 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.874 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.874 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.874 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.874 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.874 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.874 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.874 "name": "raid_bdev1", 00:18:02.874 "uuid": "f5e230c6-4609-48cc-b9b9-5f714e259cb8", 00:18:02.874 "strip_size_kb": 0, 00:18:02.874 "state": "online", 00:18:02.874 "raid_level": "raid1", 00:18:02.874 "superblock": true, 00:18:02.874 "num_base_bdevs": 4, 00:18:02.874 "num_base_bdevs_discovered": 3, 00:18:02.874 "num_base_bdevs_operational": 3, 00:18:02.874 
"base_bdevs_list": [ 00:18:02.874 { 00:18:02.874 "name": null, 00:18:02.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.874 "is_configured": false, 00:18:02.874 "data_offset": 2048, 00:18:02.874 "data_size": 63488 00:18:02.874 }, 00:18:02.874 { 00:18:02.874 "name": "pt2", 00:18:02.874 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:02.874 "is_configured": true, 00:18:02.874 "data_offset": 2048, 00:18:02.874 "data_size": 63488 00:18:02.874 }, 00:18:02.874 { 00:18:02.874 "name": "pt3", 00:18:02.874 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:02.874 "is_configured": true, 00:18:02.874 "data_offset": 2048, 00:18:02.874 "data_size": 63488 00:18:02.874 }, 00:18:02.874 { 00:18:02.874 "name": "pt4", 00:18:02.874 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:02.874 "is_configured": true, 00:18:02.874 "data_offset": 2048, 00:18:02.874 "data_size": 63488 00:18:02.874 } 00:18:02.874 ] 00:18:02.874 }' 00:18:02.874 13:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.874 13:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.440 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:03.440 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:03.440 13:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.440 13:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.440 13:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.440 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:03.440 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:03.440 13:12:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.440 13:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.441 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:03.441 [2024-12-06 13:12:50.347121] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:03.441 13:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.441 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' f5e230c6-4609-48cc-b9b9-5f714e259cb8 '!=' f5e230c6-4609-48cc-b9b9-5f714e259cb8 ']' 00:18:03.441 13:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74957 00:18:03.441 13:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74957 ']' 00:18:03.441 13:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74957 00:18:03.441 13:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:18:03.441 13:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:03.441 13:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74957 00:18:03.441 killing process with pid 74957 00:18:03.441 13:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:03.441 13:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:03.441 13:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74957' 00:18:03.441 13:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74957 00:18:03.441 [2024-12-06 13:12:50.430827] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:03.441 13:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # 
wait 74957 00:18:03.441 [2024-12-06 13:12:50.430982] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:03.441 [2024-12-06 13:12:50.431125] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:03.441 [2024-12-06 13:12:50.431146] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:04.008 [2024-12-06 13:12:50.787062] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:04.947 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:18:04.947 ************************************ 00:18:04.947 END TEST raid_superblock_test 00:18:04.947 ************************************ 00:18:04.947 00:18:04.947 real 0m9.722s 00:18:04.947 user 0m15.856s 00:18:04.947 sys 0m1.521s 00:18:04.947 13:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:04.947 13:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.947 13:12:51 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:18:04.947 13:12:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:04.947 13:12:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:04.947 13:12:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:04.947 ************************************ 00:18:04.947 START TEST raid_read_error_test 00:18:04.947 ************************************ 00:18:05.206 13:12:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:18:05.206 13:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:18:05.206 13:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:18:05.206 13:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # 
local error_io_type=read 00:18:05.206 13:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:18:05.206 13:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:05.206 13:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:18:05.206 13:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:05.206 13:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:05.206 13:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:18:05.206 13:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:05.206 13:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:05.206 13:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:18:05.206 13:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:05.206 13:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:05.206 13:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:18:05.206 13:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:05.206 13:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:05.206 13:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:05.206 13:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:18:05.206 13:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:18:05.206 13:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:18:05.206 13:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 
00:18:05.206 13:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:18:05.206 13:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:18:05.206 13:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:18:05.206 13:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:18:05.206 13:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:18:05.206 13:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9Yow4QIHVW 00:18:05.206 13:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75456 00:18:05.206 13:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75456 00:18:05.206 13:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:05.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.206 13:12:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75456 ']' 00:18:05.207 13:12:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.207 13:12:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:05.207 13:12:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.207 13:12:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:05.207 13:12:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.207 [2024-12-06 13:12:52.097837] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:18:05.207 [2024-12-06 13:12:52.098028] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75456 ] 00:18:05.470 [2024-12-06 13:12:52.288757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.470 [2024-12-06 13:12:52.433062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.739 [2024-12-06 13:12:52.656573] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:05.739 [2024-12-06 13:12:52.656861] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:06.306 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:06.306 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:18:06.306 13:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:06.306 13:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:06.306 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.306 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.306 BaseBdev1_malloc 00:18:06.306 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.306 13:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:18:06.306 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.306 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.306 true 00:18:06.306 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:06.306 13:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:06.306 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.306 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.306 [2024-12-06 13:12:53.220334] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:06.306 [2024-12-06 13:12:53.220429] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.306 [2024-12-06 13:12:53.220462] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:06.306 [2024-12-06 13:12:53.220506] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.306 [2024-12-06 13:12:53.223931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.306 [2024-12-06 13:12:53.224149] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:06.307 BaseBdev1 00:18:06.307 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.307 13:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:06.307 13:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:06.307 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.307 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.307 BaseBdev2_malloc 00:18:06.307 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.307 13:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:18:06.307 13:12:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.307 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.307 true 00:18:06.307 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.307 13:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:06.307 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.307 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.307 [2024-12-06 13:12:53.296326] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:06.307 [2024-12-06 13:12:53.296599] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.307 [2024-12-06 13:12:53.296767] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:06.307 [2024-12-06 13:12:53.296801] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.307 [2024-12-06 13:12:53.300237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.307 [2024-12-06 13:12:53.300305] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:06.307 BaseBdev2 00:18:06.307 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.307 13:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:06.307 13:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:06.307 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.307 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.566 BaseBdev3_malloc 00:18:06.566 13:12:53 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.566 13:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:18:06.566 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.566 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.566 true 00:18:06.566 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.566 13:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:06.566 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.566 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.566 [2024-12-06 13:12:53.391126] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:06.566 [2024-12-06 13:12:53.391385] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.566 [2024-12-06 13:12:53.391458] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:06.566 [2024-12-06 13:12:53.391732] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.566 [2024-12-06 13:12:53.394838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.566 [2024-12-06 13:12:53.394892] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:06.566 BaseBdev3 00:18:06.566 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.566 13:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:06.566 13:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:18:06.566 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.566 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.566 BaseBdev4_malloc 00:18:06.566 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.566 13:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:18:06.566 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.566 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.566 true 00:18:06.566 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.566 13:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:18:06.566 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.566 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.566 [2024-12-06 13:12:53.457978] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:18:06.566 [2024-12-06 13:12:53.458071] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.566 [2024-12-06 13:12:53.458100] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:06.566 [2024-12-06 13:12:53.458117] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.566 [2024-12-06 13:12:53.461233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.566 [2024-12-06 13:12:53.461301] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:06.566 BaseBdev4 00:18:06.567 13:12:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.567 13:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:18:06.567 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.567 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.567 [2024-12-06 13:12:53.470031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:06.567 [2024-12-06 13:12:53.472682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:06.567 [2024-12-06 13:12:53.472788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:06.567 [2024-12-06 13:12:53.472897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:06.567 [2024-12-06 13:12:53.473176] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:18:06.567 [2024-12-06 13:12:53.473197] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:06.567 [2024-12-06 13:12:53.473538] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:18:06.567 [2024-12-06 13:12:53.473796] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:18:06.567 [2024-12-06 13:12:53.473811] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:18:06.567 [2024-12-06 13:12:53.474116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.567 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.567 13:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:06.567 13:12:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.567 13:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.567 13:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.567 13:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.567 13:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:06.567 13:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.567 13:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.567 13:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.567 13:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.567 13:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.567 13:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.567 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.567 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.567 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.567 13:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.567 "name": "raid_bdev1", 00:18:06.567 "uuid": "76d3593f-8870-4192-be57-dbc42a6ad9b8", 00:18:06.567 "strip_size_kb": 0, 00:18:06.567 "state": "online", 00:18:06.567 "raid_level": "raid1", 00:18:06.567 "superblock": true, 00:18:06.567 "num_base_bdevs": 4, 00:18:06.567 "num_base_bdevs_discovered": 4, 00:18:06.567 "num_base_bdevs_operational": 4, 00:18:06.567 "base_bdevs_list": [ 00:18:06.567 { 
00:18:06.567 "name": "BaseBdev1", 00:18:06.567 "uuid": "294cf2c9-4641-5dce-a69e-ff88d0d8cca1", 00:18:06.567 "is_configured": true, 00:18:06.567 "data_offset": 2048, 00:18:06.567 "data_size": 63488 00:18:06.567 }, 00:18:06.567 { 00:18:06.567 "name": "BaseBdev2", 00:18:06.567 "uuid": "2e4bc673-4eb6-5e9e-92a5-9efbd8d19c6b", 00:18:06.567 "is_configured": true, 00:18:06.567 "data_offset": 2048, 00:18:06.567 "data_size": 63488 00:18:06.567 }, 00:18:06.567 { 00:18:06.567 "name": "BaseBdev3", 00:18:06.567 "uuid": "1fc3e79d-ebed-5b63-88bc-8c2686a733f8", 00:18:06.567 "is_configured": true, 00:18:06.567 "data_offset": 2048, 00:18:06.567 "data_size": 63488 00:18:06.567 }, 00:18:06.567 { 00:18:06.567 "name": "BaseBdev4", 00:18:06.567 "uuid": "e5e38c80-b5de-551d-ad03-5f2d11187e67", 00:18:06.567 "is_configured": true, 00:18:06.567 "data_offset": 2048, 00:18:06.567 "data_size": 63488 00:18:06.567 } 00:18:06.567 ] 00:18:06.567 }' 00:18:06.567 13:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.567 13:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.138 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:18:07.138 13:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:07.138 [2024-12-06 13:12:54.139905] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:18:08.074 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:18:08.074 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.074 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.074 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.074 13:12:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:18:08.074 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:18:08.074 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:18:08.074 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:18:08.074 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:08.074 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:08.074 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:08.074 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:08.074 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:08.074 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:08.074 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.074 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.074 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.074 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.074 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.074 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.074 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.074 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.074 13:12:55 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.074 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.074 "name": "raid_bdev1", 00:18:08.074 "uuid": "76d3593f-8870-4192-be57-dbc42a6ad9b8", 00:18:08.074 "strip_size_kb": 0, 00:18:08.074 "state": "online", 00:18:08.074 "raid_level": "raid1", 00:18:08.074 "superblock": true, 00:18:08.074 "num_base_bdevs": 4, 00:18:08.074 "num_base_bdevs_discovered": 4, 00:18:08.074 "num_base_bdevs_operational": 4, 00:18:08.074 "base_bdevs_list": [ 00:18:08.074 { 00:18:08.074 "name": "BaseBdev1", 00:18:08.074 "uuid": "294cf2c9-4641-5dce-a69e-ff88d0d8cca1", 00:18:08.074 "is_configured": true, 00:18:08.074 "data_offset": 2048, 00:18:08.074 "data_size": 63488 00:18:08.074 }, 00:18:08.074 { 00:18:08.074 "name": "BaseBdev2", 00:18:08.074 "uuid": "2e4bc673-4eb6-5e9e-92a5-9efbd8d19c6b", 00:18:08.074 "is_configured": true, 00:18:08.074 "data_offset": 2048, 00:18:08.074 "data_size": 63488 00:18:08.074 }, 00:18:08.074 { 00:18:08.074 "name": "BaseBdev3", 00:18:08.074 "uuid": "1fc3e79d-ebed-5b63-88bc-8c2686a733f8", 00:18:08.074 "is_configured": true, 00:18:08.074 "data_offset": 2048, 00:18:08.074 "data_size": 63488 00:18:08.074 }, 00:18:08.074 { 00:18:08.074 "name": "BaseBdev4", 00:18:08.074 "uuid": "e5e38c80-b5de-551d-ad03-5f2d11187e67", 00:18:08.074 "is_configured": true, 00:18:08.074 "data_offset": 2048, 00:18:08.074 "data_size": 63488 00:18:08.074 } 00:18:08.074 ] 00:18:08.075 }' 00:18:08.075 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.075 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.639 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:08.639 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.639 13:12:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:08.639 [2024-12-06 13:12:55.556116] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:08.639 [2024-12-06 13:12:55.556161] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:08.639 [2024-12-06 13:12:55.560202] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:08.639 [2024-12-06 13:12:55.560518] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.639 [2024-12-06 13:12:55.560825] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:08.639 [2024-12-06 13:12:55.561036] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:18:08.639 { 00:18:08.639 "results": [ 00:18:08.639 { 00:18:08.639 "job": "raid_bdev1", 00:18:08.639 "core_mask": "0x1", 00:18:08.639 "workload": "randrw", 00:18:08.639 "percentage": 50, 00:18:08.639 "status": "finished", 00:18:08.639 "queue_depth": 1, 00:18:08.639 "io_size": 131072, 00:18:08.639 "runtime": 1.413503, 00:18:08.639 "iops": 6484.59890074517, 00:18:08.639 "mibps": 810.5748625931462, 00:18:08.639 "io_failed": 0, 00:18:08.639 "io_timeout": 0, 00:18:08.639 "avg_latency_us": 149.87598278221884, 00:18:08.639 "min_latency_us": 40.261818181818185, 00:18:08.639 "max_latency_us": 2129.92 00:18:08.639 } 00:18:08.639 ], 00:18:08.639 "core_count": 1 00:18:08.639 } 00:18:08.639 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.639 13:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75456 00:18:08.639 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75456 ']' 00:18:08.639 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75456 00:18:08.639 13:12:55 bdev_raid.raid_read_error_test --
common/autotest_common.sh@959 -- # uname 00:18:08.639 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:08.639 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75456 00:18:08.639 killing process with pid 75456 00:18:08.639 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:08.640 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:08.640 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75456' 00:18:08.640 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75456 00:18:08.640 [2024-12-06 13:12:55.601471] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:08.640 13:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75456 00:18:08.898 [2024-12-06 13:12:55.909467] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:10.275 13:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9Yow4QIHVW 00:18:10.275 13:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:18:10.275 13:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:18:10.275 13:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:18:10.275 13:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:18:10.275 13:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:10.275 13:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:10.275 13:12:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:18:10.275 00:18:10.275 real 0m5.105s 00:18:10.275 user 0m6.266s 00:18:10.275 sys 0m0.684s 
00:18:10.275 13:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:10.275 ************************************ 00:18:10.275 END TEST raid_read_error_test 00:18:10.275 ************************************ 00:18:10.275 13:12:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.275 13:12:57 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:18:10.275 13:12:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:10.275 13:12:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:10.275 13:12:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:10.275 ************************************ 00:18:10.275 START TEST raid_write_error_test 00:18:10.275 ************************************ 00:18:10.275 13:12:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:18:10.275 13:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:18:10.275 13:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:18:10.275 13:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:18:10.275 13:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:18:10.275 13:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:10.275 13:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:18:10.275 13:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:10.275 13:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:10.275 13:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:18:10.275 13:12:57 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:10.275 13:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:10.275 13:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:18:10.276 13:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:10.276 13:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:10.276 13:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:18:10.276 13:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:18:10.276 13:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:18:10.276 13:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:10.276 13:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:18:10.276 13:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:18:10.276 13:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:18:10.276 13:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:18:10.276 13:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:18:10.276 13:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:18:10.276 13:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:18:10.276 13:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:18:10.276 13:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:18:10.276 13:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1oxL2AcJFQ 00:18:10.276 13:12:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75607 00:18:10.276 13:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75607 00:18:10.276 13:12:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75607 ']' 00:18:10.276 13:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:10.276 13:12:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.276 13:12:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:10.276 13:12:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.276 13:12:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:10.276 13:12:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.276 [2024-12-06 13:12:57.250880] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:18:10.276 [2024-12-06 13:12:57.251422] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75607 ] 00:18:10.536 [2024-12-06 13:12:57.440993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.795 [2024-12-06 13:12:57.590215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.795 [2024-12-06 13:12:57.806241] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:10.795 [2024-12-06 13:12:57.806332] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:11.362 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:11.362 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:18:11.362 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:11.362 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:11.362 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.362 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.362 BaseBdev1_malloc 00:18:11.362 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.362 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:18:11.362 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.362 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.362 true 00:18:11.362 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:18:11.362 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:11.362 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.362 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.362 [2024-12-06 13:12:58.250949] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:11.362 [2024-12-06 13:12:58.251061] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:11.362 [2024-12-06 13:12:58.251110] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:11.362 [2024-12-06 13:12:58.251144] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:11.362 [2024-12-06 13:12:58.254125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:11.362 [2024-12-06 13:12:58.254191] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:11.362 BaseBdev1 00:18:11.362 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.362 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:11.362 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:11.362 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.362 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.362 BaseBdev2_malloc 00:18:11.362 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.362 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:18:11.362 13:12:58 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.362 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.362 true 00:18:11.362 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.362 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:11.362 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.362 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.362 [2024-12-06 13:12:58.313089] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:11.362 [2024-12-06 13:12:58.313179] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:11.362 [2024-12-06 13:12:58.313206] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:11.362 [2024-12-06 13:12:58.313238] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:11.362 [2024-12-06 13:12:58.316373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:11.362 [2024-12-06 13:12:58.316436] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:11.362 BaseBdev2 00:18:11.362 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.362 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:11.362 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:11.362 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.362 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:18:11.362 BaseBdev3_malloc 00:18:11.362 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.362 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:18:11.362 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.363 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.621 true 00:18:11.621 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.621 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:11.621 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.621 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.621 [2024-12-06 13:12:58.380635] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:11.621 [2024-12-06 13:12:58.380727] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:11.621 [2024-12-06 13:12:58.380756] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:11.621 [2024-12-06 13:12:58.380774] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:11.621 [2024-12-06 13:12:58.383862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:11.621 [2024-12-06 13:12:58.383927] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:11.621 BaseBdev3 00:18:11.621 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.621 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:18:11.621 13:12:58 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:11.621 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.621 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.621 BaseBdev4_malloc 00:18:11.621 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.622 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:18:11.622 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.622 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.622 true 00:18:11.622 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.622 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:18:11.622 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.622 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.622 [2024-12-06 13:12:58.438961] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:18:11.622 [2024-12-06 13:12:58.439279] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:11.622 [2024-12-06 13:12:58.439328] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:11.622 [2024-12-06 13:12:58.439348] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:11.622 [2024-12-06 13:12:58.442504] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:11.622 [2024-12-06 13:12:58.442775] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:11.622 BaseBdev4 
00:18:11.622 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.622 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:18:11.622 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.622 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.622 [2024-12-06 13:12:58.447197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:11.622 [2024-12-06 13:12:58.449924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:11.622 [2024-12-06 13:12:58.450213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:11.622 [2024-12-06 13:12:58.450325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:11.622 [2024-12-06 13:12:58.450722] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:18:11.622 [2024-12-06 13:12:58.450777] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:11.622 [2024-12-06 13:12:58.451140] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:18:11.622 [2024-12-06 13:12:58.451363] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:18:11.622 [2024-12-06 13:12:58.451379] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:18:11.622 [2024-12-06 13:12:58.451622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:11.622 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.622 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:18:11.622 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:11.622 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:11.622 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:11.622 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:11.622 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:11.622 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.622 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.622 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.622 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.622 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.622 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.622 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.622 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.622 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.622 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.622 "name": "raid_bdev1", 00:18:11.622 "uuid": "ed2f555a-dec4-4f58-abf4-50fe1e30ba67", 00:18:11.622 "strip_size_kb": 0, 00:18:11.622 "state": "online", 00:18:11.622 "raid_level": "raid1", 00:18:11.622 "superblock": true, 00:18:11.622 "num_base_bdevs": 4, 00:18:11.622 "num_base_bdevs_discovered": 4, 00:18:11.622 
"num_base_bdevs_operational": 4, 00:18:11.622 "base_bdevs_list": [ 00:18:11.622 { 00:18:11.622 "name": "BaseBdev1", 00:18:11.622 "uuid": "38df7360-9adb-5dfa-9477-fbb03fb7c760", 00:18:11.622 "is_configured": true, 00:18:11.622 "data_offset": 2048, 00:18:11.622 "data_size": 63488 00:18:11.622 }, 00:18:11.622 { 00:18:11.622 "name": "BaseBdev2", 00:18:11.622 "uuid": "e44d84b9-1d67-5c24-817d-6f7a0974ecad", 00:18:11.622 "is_configured": true, 00:18:11.622 "data_offset": 2048, 00:18:11.622 "data_size": 63488 00:18:11.622 }, 00:18:11.622 { 00:18:11.622 "name": "BaseBdev3", 00:18:11.622 "uuid": "060591c7-8ec2-5a50-8642-7f67fd7da598", 00:18:11.622 "is_configured": true, 00:18:11.622 "data_offset": 2048, 00:18:11.622 "data_size": 63488 00:18:11.622 }, 00:18:11.622 { 00:18:11.622 "name": "BaseBdev4", 00:18:11.622 "uuid": "04e7b4bb-ad73-56db-954d-809ff2d5d076", 00:18:11.622 "is_configured": true, 00:18:11.622 "data_offset": 2048, 00:18:11.622 "data_size": 63488 00:18:11.622 } 00:18:11.622 ] 00:18:11.622 }' 00:18:11.622 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.622 13:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.189 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:18:12.189 13:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:12.189 [2024-12-06 13:12:59.105443] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:18:13.125 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:18:13.125 13:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.125 13:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.125 [2024-12-06 13:12:59.976864] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:18:13.125 [2024-12-06 13:12:59.976955] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:13.125 [2024-12-06 13:12:59.977285] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:18:13.125 13:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.125 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:18:13.125 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:18:13.125 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:18:13.125 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:18:13.125 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:13.125 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:13.125 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:13.125 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:13.125 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.125 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:13.125 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.125 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.125 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.125 13:12:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.125 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.125 13:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.125 13:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.125 13:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.125 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.125 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.125 "name": "raid_bdev1", 00:18:13.125 "uuid": "ed2f555a-dec4-4f58-abf4-50fe1e30ba67", 00:18:13.125 "strip_size_kb": 0, 00:18:13.125 "state": "online", 00:18:13.125 "raid_level": "raid1", 00:18:13.125 "superblock": true, 00:18:13.125 "num_base_bdevs": 4, 00:18:13.125 "num_base_bdevs_discovered": 3, 00:18:13.125 "num_base_bdevs_operational": 3, 00:18:13.125 "base_bdevs_list": [ 00:18:13.125 { 00:18:13.125 "name": null, 00:18:13.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.125 "is_configured": false, 00:18:13.125 "data_offset": 0, 00:18:13.125 "data_size": 63488 00:18:13.125 }, 00:18:13.125 { 00:18:13.125 "name": "BaseBdev2", 00:18:13.125 "uuid": "e44d84b9-1d67-5c24-817d-6f7a0974ecad", 00:18:13.125 "is_configured": true, 00:18:13.125 "data_offset": 2048, 00:18:13.125 "data_size": 63488 00:18:13.125 }, 00:18:13.125 { 00:18:13.125 "name": "BaseBdev3", 00:18:13.125 "uuid": "060591c7-8ec2-5a50-8642-7f67fd7da598", 00:18:13.125 "is_configured": true, 00:18:13.125 "data_offset": 2048, 00:18:13.125 "data_size": 63488 00:18:13.125 }, 00:18:13.125 { 00:18:13.125 "name": "BaseBdev4", 00:18:13.125 "uuid": "04e7b4bb-ad73-56db-954d-809ff2d5d076", 00:18:13.125 "is_configured": true, 00:18:13.125 "data_offset": 2048, 00:18:13.125 "data_size": 63488 00:18:13.125 } 00:18:13.125 ] 
00:18:13.125 }' 00:18:13.125 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.125 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.692 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:13.692 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.692 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.692 [2024-12-06 13:13:00.539380] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:13.692 [2024-12-06 13:13:00.539562] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:13.692 [2024-12-06 13:13:00.543121] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:13.692 [2024-12-06 13:13:00.543303] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:13.692 [2024-12-06 13:13:00.543572] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:13.692 [2024-12-06 13:13:00.543773] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:18:13.692 { 00:18:13.692 "results": [ 00:18:13.692 { 00:18:13.692 "job": "raid_bdev1", 00:18:13.692 "core_mask": "0x1", 00:18:13.692 "workload": "randrw", 00:18:13.692 "percentage": 50, 00:18:13.692 "status": "finished", 00:18:13.692 "queue_depth": 1, 00:18:13.692 "io_size": 131072, 00:18:13.692 "runtime": 1.431144, 00:18:13.692 "iops": 7134.152817606055, 00:18:13.692 "mibps": 891.7691022007568, 00:18:13.692 "io_failed": 0, 00:18:13.692 "io_timeout": 0, 00:18:13.692 "avg_latency_us": 135.77059496037754, 00:18:13.692 "min_latency_us": 40.02909090909091, 00:18:13.692 "max_latency_us": 1980.9745454545455 00:18:13.692 } 00:18:13.692 ], 00:18:13.692 "core_count": 1 
00:18:13.692 } 00:18:13.692 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.692 13:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75607 00:18:13.692 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75607 ']' 00:18:13.692 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75607 00:18:13.692 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:18:13.692 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.692 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75607 00:18:13.692 killing process with pid 75607 00:18:13.692 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:13.692 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:13.692 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75607' 00:18:13.692 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75607 00:18:13.692 13:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75607 00:18:13.692 [2024-12-06 13:13:00.582190] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:13.950 [2024-12-06 13:13:00.893156] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:15.454 13:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1oxL2AcJFQ 00:18:15.454 13:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:18:15.454 13:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:18:15.454 13:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:18:15.454 13:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:18:15.454 13:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:15.454 13:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:15.455 13:13:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:18:15.455 00:18:15.455 real 0m4.991s 00:18:15.455 user 0m5.986s 00:18:15.455 sys 0m0.754s 00:18:15.455 ************************************ 00:18:15.455 END TEST raid_write_error_test 00:18:15.455 ************************************ 00:18:15.455 13:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:15.455 13:13:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.455 13:13:02 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:18:15.455 13:13:02 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:18:15.455 13:13:02 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:18:15.455 13:13:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:15.455 13:13:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:15.455 13:13:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:15.455 ************************************ 00:18:15.455 START TEST raid_rebuild_test 00:18:15.455 ************************************ 00:18:15.455 13:13:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:18:15.455 13:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:15.455 13:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:15.455 13:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:15.455 
13:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:15.455 13:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:15.455 13:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:15.455 13:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:15.455 13:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:15.455 13:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:15.455 13:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:15.455 13:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:15.455 13:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:15.455 13:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:15.455 13:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:15.455 13:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:15.455 13:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:15.455 13:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:15.455 13:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:15.455 13:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:15.455 13:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:15.455 13:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:15.455 13:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:15.455 13:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:18:15.455 13:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75751 00:18:15.455 13:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75751 00:18:15.455 13:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:15.455 13:13:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75751 ']' 00:18:15.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.455 13:13:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.455 13:13:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.455 13:13:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.455 13:13:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.455 13:13:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.455 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:15.455 Zero copy mechanism will not be used. 00:18:15.455 [2024-12-06 13:13:02.296651] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:18:15.455 [2024-12-06 13:13:02.296900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75751 ] 00:18:15.714 [2024-12-06 13:13:02.490140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.714 [2024-12-06 13:13:02.667712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.972 [2024-12-06 13:13:02.903306] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:15.972 [2024-12-06 13:13:02.903370] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:16.540 13:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.540 13:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:18:16.540 13:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:16.540 13:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:16.540 13:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.540 13:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.540 BaseBdev1_malloc 00:18:16.540 13:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.540 13:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:16.540 13:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.540 13:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.540 [2024-12-06 13:13:03.385628] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:16.540 
[2024-12-06 13:13:03.385904] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.540 [2024-12-06 13:13:03.385954] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:16.540 [2024-12-06 13:13:03.385978] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.540 [2024-12-06 13:13:03.389230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.540 BaseBdev1 00:18:16.540 [2024-12-06 13:13:03.389414] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:16.540 13:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.540 13:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:16.540 13:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:16.540 13:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.540 13:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.540 BaseBdev2_malloc 00:18:16.540 13:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.540 13:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:16.540 13:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.540 13:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.540 [2024-12-06 13:13:03.440299] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:16.540 [2024-12-06 13:13:03.440427] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.540 [2024-12-06 13:13:03.440463] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:18:16.540 [2024-12-06 13:13:03.440503] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.540 [2024-12-06 13:13:03.443774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.540 [2024-12-06 13:13:03.443841] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:16.540 BaseBdev2 00:18:16.540 13:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.540 13:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:16.540 13:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.540 13:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.540 spare_malloc 00:18:16.540 13:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.540 13:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:16.540 13:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.540 13:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.540 spare_delay 00:18:16.540 13:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.540 13:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:16.540 13:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.541 13:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.541 [2024-12-06 13:13:03.513638] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:16.541 [2024-12-06 13:13:03.513739] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:18:16.541 [2024-12-06 13:13:03.513772] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:16.541 [2024-12-06 13:13:03.513806] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.541 [2024-12-06 13:13:03.516935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.541 [2024-12-06 13:13:03.516997] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:16.541 spare 00:18:16.541 13:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.541 13:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:16.541 13:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.541 13:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.541 [2024-12-06 13:13:03.521870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:16.541 [2024-12-06 13:13:03.524669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:16.541 [2024-12-06 13:13:03.524800] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:16.541 [2024-12-06 13:13:03.524840] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:16.541 [2024-12-06 13:13:03.525175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:16.541 [2024-12-06 13:13:03.525445] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:16.541 [2024-12-06 13:13:03.525482] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:16.541 [2024-12-06 13:13:03.525747] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:18:16.541 13:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.541 13:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:16.541 13:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.541 13:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.541 13:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.541 13:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.541 13:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:16.541 13:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.541 13:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.541 13:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.541 13:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.541 13:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.541 13:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.541 13:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.541 13:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.541 13:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.799 13:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.799 "name": "raid_bdev1", 00:18:16.799 "uuid": "c6d40c13-bdd7-4311-8aed-10720b4e5be5", 00:18:16.799 "strip_size_kb": 0, 00:18:16.799 "state": "online", 00:18:16.799 
"raid_level": "raid1", 00:18:16.799 "superblock": false, 00:18:16.799 "num_base_bdevs": 2, 00:18:16.799 "num_base_bdevs_discovered": 2, 00:18:16.799 "num_base_bdevs_operational": 2, 00:18:16.799 "base_bdevs_list": [ 00:18:16.799 { 00:18:16.799 "name": "BaseBdev1", 00:18:16.799 "uuid": "860c0a19-356a-52cc-a921-795c58408c0f", 00:18:16.799 "is_configured": true, 00:18:16.799 "data_offset": 0, 00:18:16.799 "data_size": 65536 00:18:16.799 }, 00:18:16.799 { 00:18:16.799 "name": "BaseBdev2", 00:18:16.799 "uuid": "6db545a1-6a0e-541d-9c90-cb1b8f1fc9a2", 00:18:16.799 "is_configured": true, 00:18:16.799 "data_offset": 0, 00:18:16.799 "data_size": 65536 00:18:16.799 } 00:18:16.799 ] 00:18:16.799 }' 00:18:16.799 13:13:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.799 13:13:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.058 13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:17.058 13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:17.058 13:13:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.058 13:13:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.058 [2024-12-06 13:13:04.058521] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:17.317 13:13:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.317 13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:18:17.317 13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:17.317 13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.317 13:13:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.317 13:13:04 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.317 13:13:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.317 13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:18:17.317 13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:17.317 13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:17.317 13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:17.317 13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:17.317 13:13:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:17.317 13:13:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:17.317 13:13:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:17.317 13:13:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:17.317 13:13:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:17.317 13:13:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:17.317 13:13:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:17.317 13:13:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:17.317 13:13:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:17.578 [2024-12-06 13:13:04.490359] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:17.578 /dev/nbd0 00:18:17.578 13:13:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:17.578 13:13:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:18:17.578 13:13:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:17.578 13:13:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:17.578 13:13:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:17.578 13:13:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:17.578 13:13:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:17.578 13:13:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:17.578 13:13:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:17.578 13:13:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:17.578 13:13:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:17.578 1+0 records in 00:18:17.578 1+0 records out 00:18:17.578 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000468989 s, 8.7 MB/s 00:18:17.578 13:13:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:17.578 13:13:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:17.578 13:13:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:17.578 13:13:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:17.578 13:13:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:17.578 13:13:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:17.578 13:13:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:17.578 13:13:04 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:17.578 13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:17.578 13:13:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:18:24.140 65536+0 records in 00:18:24.140 65536+0 records out 00:18:24.140 33554432 bytes (34 MB, 32 MiB) copied, 6.46283 s, 5.2 MB/s 00:18:24.140 13:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:24.140 13:13:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:24.140 13:13:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:24.140 13:13:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:24.140 13:13:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:24.140 13:13:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:24.140 13:13:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:24.400 [2024-12-06 13:13:11.299308] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:24.400 13:13:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:24.400 13:13:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:24.400 13:13:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:24.400 13:13:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:24.400 13:13:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:24.400 13:13:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:24.400 13:13:11 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:18:24.400 13:13:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:24.400 13:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:24.400 13:13:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.400 13:13:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.400 [2024-12-06 13:13:11.332500] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:24.400 13:13:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.400 13:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:24.400 13:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:24.400 13:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:24.400 13:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:24.400 13:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:24.400 13:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:24.400 13:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.400 13:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.400 13:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.400 13:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.400 13:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.400 13:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.400 13:13:11 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.400 13:13:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.400 13:13:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.400 13:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.400 "name": "raid_bdev1", 00:18:24.400 "uuid": "c6d40c13-bdd7-4311-8aed-10720b4e5be5", 00:18:24.400 "strip_size_kb": 0, 00:18:24.400 "state": "online", 00:18:24.400 "raid_level": "raid1", 00:18:24.400 "superblock": false, 00:18:24.400 "num_base_bdevs": 2, 00:18:24.400 "num_base_bdevs_discovered": 1, 00:18:24.400 "num_base_bdevs_operational": 1, 00:18:24.400 "base_bdevs_list": [ 00:18:24.400 { 00:18:24.400 "name": null, 00:18:24.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.400 "is_configured": false, 00:18:24.400 "data_offset": 0, 00:18:24.400 "data_size": 65536 00:18:24.400 }, 00:18:24.400 { 00:18:24.400 "name": "BaseBdev2", 00:18:24.400 "uuid": "6db545a1-6a0e-541d-9c90-cb1b8f1fc9a2", 00:18:24.400 "is_configured": true, 00:18:24.400 "data_offset": 0, 00:18:24.400 "data_size": 65536 00:18:24.400 } 00:18:24.400 ] 00:18:24.400 }' 00:18:24.400 13:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.400 13:13:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.968 13:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:24.968 13:13:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.968 13:13:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.968 [2024-12-06 13:13:11.856653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:24.968 [2024-12-06 13:13:11.874702] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:18:24.968 13:13:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.968 13:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:24.968 [2024-12-06 13:13:11.877404] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:25.990 13:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:25.990 13:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:25.990 13:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:25.990 13:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:25.990 13:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:25.990 13:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.990 13:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.990 13:13:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.991 13:13:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.991 13:13:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.991 13:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:25.991 "name": "raid_bdev1", 00:18:25.991 "uuid": "c6d40c13-bdd7-4311-8aed-10720b4e5be5", 00:18:25.991 "strip_size_kb": 0, 00:18:25.991 "state": "online", 00:18:25.991 "raid_level": "raid1", 00:18:25.991 "superblock": false, 00:18:25.991 "num_base_bdevs": 2, 00:18:25.991 "num_base_bdevs_discovered": 2, 00:18:25.991 "num_base_bdevs_operational": 2, 00:18:25.991 "process": { 00:18:25.991 "type": "rebuild", 00:18:25.991 "target": "spare", 00:18:25.991 "progress": { 00:18:25.991 
"blocks": 20480, 00:18:25.991 "percent": 31 00:18:25.991 } 00:18:25.991 }, 00:18:25.991 "base_bdevs_list": [ 00:18:25.991 { 00:18:25.991 "name": "spare", 00:18:25.991 "uuid": "5b00a726-6ccc-55f1-b0ea-40cd6f1e7ea1", 00:18:25.991 "is_configured": true, 00:18:25.991 "data_offset": 0, 00:18:25.991 "data_size": 65536 00:18:25.991 }, 00:18:25.991 { 00:18:25.991 "name": "BaseBdev2", 00:18:25.991 "uuid": "6db545a1-6a0e-541d-9c90-cb1b8f1fc9a2", 00:18:25.991 "is_configured": true, 00:18:25.991 "data_offset": 0, 00:18:25.991 "data_size": 65536 00:18:25.991 } 00:18:25.991 ] 00:18:25.991 }' 00:18:25.991 13:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:25.991 13:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:25.991 13:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:26.249 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:26.249 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:26.249 13:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.249 13:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.249 [2024-12-06 13:13:13.043294] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:26.249 [2024-12-06 13:13:13.089088] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:26.249 [2024-12-06 13:13:13.089247] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.249 [2024-12-06 13:13:13.089274] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:26.249 [2024-12-06 13:13:13.089290] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:26.249 13:13:13 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.249 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:26.249 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:26.249 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:26.249 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:26.249 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:26.249 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:26.249 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.249 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.249 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.249 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.249 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.249 13:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.249 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.249 13:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.249 13:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.249 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.249 "name": "raid_bdev1", 00:18:26.249 "uuid": "c6d40c13-bdd7-4311-8aed-10720b4e5be5", 00:18:26.249 "strip_size_kb": 0, 00:18:26.249 "state": "online", 00:18:26.249 "raid_level": "raid1", 00:18:26.249 
"superblock": false, 00:18:26.249 "num_base_bdevs": 2, 00:18:26.249 "num_base_bdevs_discovered": 1, 00:18:26.249 "num_base_bdevs_operational": 1, 00:18:26.249 "base_bdevs_list": [ 00:18:26.249 { 00:18:26.249 "name": null, 00:18:26.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.249 "is_configured": false, 00:18:26.249 "data_offset": 0, 00:18:26.249 "data_size": 65536 00:18:26.249 }, 00:18:26.249 { 00:18:26.249 "name": "BaseBdev2", 00:18:26.249 "uuid": "6db545a1-6a0e-541d-9c90-cb1b8f1fc9a2", 00:18:26.249 "is_configured": true, 00:18:26.249 "data_offset": 0, 00:18:26.249 "data_size": 65536 00:18:26.249 } 00:18:26.249 ] 00:18:26.249 }' 00:18:26.249 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.249 13:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.816 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:26.816 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:26.816 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:26.816 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:26.816 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:26.816 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.816 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.816 13:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.816 13:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.816 13:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.816 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:26.816 "name": "raid_bdev1", 00:18:26.816 "uuid": "c6d40c13-bdd7-4311-8aed-10720b4e5be5", 00:18:26.816 "strip_size_kb": 0, 00:18:26.816 "state": "online", 00:18:26.816 "raid_level": "raid1", 00:18:26.816 "superblock": false, 00:18:26.816 "num_base_bdevs": 2, 00:18:26.816 "num_base_bdevs_discovered": 1, 00:18:26.816 "num_base_bdevs_operational": 1, 00:18:26.816 "base_bdevs_list": [ 00:18:26.816 { 00:18:26.816 "name": null, 00:18:26.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.816 "is_configured": false, 00:18:26.816 "data_offset": 0, 00:18:26.816 "data_size": 65536 00:18:26.816 }, 00:18:26.816 { 00:18:26.816 "name": "BaseBdev2", 00:18:26.816 "uuid": "6db545a1-6a0e-541d-9c90-cb1b8f1fc9a2", 00:18:26.816 "is_configured": true, 00:18:26.816 "data_offset": 0, 00:18:26.816 "data_size": 65536 00:18:26.816 } 00:18:26.816 ] 00:18:26.816 }' 00:18:26.816 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:26.816 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:26.817 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:26.817 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:26.817 13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:26.817 13:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.817 13:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.817 [2024-12-06 13:13:13.800450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:26.817 [2024-12-06 13:13:13.817643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:18:26.817 13:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.817 
13:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:26.817 [2024-12-06 13:13:13.820610] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:28.189 13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:28.189 13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:28.189 13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:28.189 13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:28.189 13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:28.190 13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.190 13:13:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.190 13:13:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.190 13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.190 13:13:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.190 13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:28.190 "name": "raid_bdev1", 00:18:28.190 "uuid": "c6d40c13-bdd7-4311-8aed-10720b4e5be5", 00:18:28.190 "strip_size_kb": 0, 00:18:28.190 "state": "online", 00:18:28.190 "raid_level": "raid1", 00:18:28.190 "superblock": false, 00:18:28.190 "num_base_bdevs": 2, 00:18:28.190 "num_base_bdevs_discovered": 2, 00:18:28.190 "num_base_bdevs_operational": 2, 00:18:28.190 "process": { 00:18:28.190 "type": "rebuild", 00:18:28.190 "target": "spare", 00:18:28.190 "progress": { 00:18:28.190 "blocks": 20480, 00:18:28.190 "percent": 31 00:18:28.190 } 00:18:28.190 }, 00:18:28.190 "base_bdevs_list": [ 
00:18:28.190 { 00:18:28.190 "name": "spare", 00:18:28.190 "uuid": "5b00a726-6ccc-55f1-b0ea-40cd6f1e7ea1", 00:18:28.190 "is_configured": true, 00:18:28.190 "data_offset": 0, 00:18:28.190 "data_size": 65536 00:18:28.190 }, 00:18:28.190 { 00:18:28.190 "name": "BaseBdev2", 00:18:28.190 "uuid": "6db545a1-6a0e-541d-9c90-cb1b8f1fc9a2", 00:18:28.190 "is_configured": true, 00:18:28.190 "data_offset": 0, 00:18:28.190 "data_size": 65536 00:18:28.190 } 00:18:28.190 ] 00:18:28.190 }' 00:18:28.190 13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.190 13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:28.190 13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:28.190 13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:28.190 13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:28.190 13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:28.190 13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:28.190 13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:28.190 13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=405 00:18:28.190 13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:28.190 13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:28.190 13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:28.190 13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:28.190 13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:28.190 
13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:28.190 13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.190 13:13:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.190 13:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.190 13:13:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.190 13:13:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.190 13:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:28.190 "name": "raid_bdev1", 00:18:28.190 "uuid": "c6d40c13-bdd7-4311-8aed-10720b4e5be5", 00:18:28.190 "strip_size_kb": 0, 00:18:28.190 "state": "online", 00:18:28.190 "raid_level": "raid1", 00:18:28.190 "superblock": false, 00:18:28.190 "num_base_bdevs": 2, 00:18:28.190 "num_base_bdevs_discovered": 2, 00:18:28.190 "num_base_bdevs_operational": 2, 00:18:28.190 "process": { 00:18:28.190 "type": "rebuild", 00:18:28.190 "target": "spare", 00:18:28.190 "progress": { 00:18:28.190 "blocks": 22528, 00:18:28.190 "percent": 34 00:18:28.190 } 00:18:28.190 }, 00:18:28.190 "base_bdevs_list": [ 00:18:28.190 { 00:18:28.190 "name": "spare", 00:18:28.190 "uuid": "5b00a726-6ccc-55f1-b0ea-40cd6f1e7ea1", 00:18:28.190 "is_configured": true, 00:18:28.190 "data_offset": 0, 00:18:28.190 "data_size": 65536 00:18:28.190 }, 00:18:28.190 { 00:18:28.190 "name": "BaseBdev2", 00:18:28.190 "uuid": "6db545a1-6a0e-541d-9c90-cb1b8f1fc9a2", 00:18:28.190 "is_configured": true, 00:18:28.190 "data_offset": 0, 00:18:28.190 "data_size": 65536 00:18:28.190 } 00:18:28.190 ] 00:18:28.190 }' 00:18:28.190 13:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.190 13:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:18:28.190 13:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:28.190 13:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:28.190 13:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:29.561 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:29.561 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:29.561 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:29.561 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:29.561 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:29.561 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:29.561 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.561 13:13:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.561 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.561 13:13:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.561 13:13:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.561 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:29.561 "name": "raid_bdev1", 00:18:29.561 "uuid": "c6d40c13-bdd7-4311-8aed-10720b4e5be5", 00:18:29.561 "strip_size_kb": 0, 00:18:29.561 "state": "online", 00:18:29.561 "raid_level": "raid1", 00:18:29.561 "superblock": false, 00:18:29.561 "num_base_bdevs": 2, 00:18:29.561 "num_base_bdevs_discovered": 2, 00:18:29.561 "num_base_bdevs_operational": 2, 00:18:29.561 "process": { 
00:18:29.561 "type": "rebuild", 00:18:29.561 "target": "spare", 00:18:29.561 "progress": { 00:18:29.561 "blocks": 47104, 00:18:29.561 "percent": 71 00:18:29.561 } 00:18:29.561 }, 00:18:29.561 "base_bdevs_list": [ 00:18:29.561 { 00:18:29.561 "name": "spare", 00:18:29.561 "uuid": "5b00a726-6ccc-55f1-b0ea-40cd6f1e7ea1", 00:18:29.561 "is_configured": true, 00:18:29.561 "data_offset": 0, 00:18:29.561 "data_size": 65536 00:18:29.561 }, 00:18:29.561 { 00:18:29.561 "name": "BaseBdev2", 00:18:29.561 "uuid": "6db545a1-6a0e-541d-9c90-cb1b8f1fc9a2", 00:18:29.561 "is_configured": true, 00:18:29.561 "data_offset": 0, 00:18:29.561 "data_size": 65536 00:18:29.561 } 00:18:29.561 ] 00:18:29.561 }' 00:18:29.561 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:29.561 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:29.561 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:29.561 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:29.561 13:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:30.127 [2024-12-06 13:13:17.050979] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:30.127 [2024-12-06 13:13:17.051145] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:30.127 [2024-12-06 13:13:17.051245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:30.386 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:30.386 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:30.386 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:30.386 13:13:17 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:30.386 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:30.386 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.386 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.386 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.386 13:13:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.386 13:13:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.386 13:13:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.386 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:30.386 "name": "raid_bdev1", 00:18:30.386 "uuid": "c6d40c13-bdd7-4311-8aed-10720b4e5be5", 00:18:30.386 "strip_size_kb": 0, 00:18:30.386 "state": "online", 00:18:30.386 "raid_level": "raid1", 00:18:30.386 "superblock": false, 00:18:30.386 "num_base_bdevs": 2, 00:18:30.386 "num_base_bdevs_discovered": 2, 00:18:30.386 "num_base_bdevs_operational": 2, 00:18:30.386 "base_bdevs_list": [ 00:18:30.386 { 00:18:30.386 "name": "spare", 00:18:30.386 "uuid": "5b00a726-6ccc-55f1-b0ea-40cd6f1e7ea1", 00:18:30.386 "is_configured": true, 00:18:30.386 "data_offset": 0, 00:18:30.386 "data_size": 65536 00:18:30.386 }, 00:18:30.386 { 00:18:30.386 "name": "BaseBdev2", 00:18:30.386 "uuid": "6db545a1-6a0e-541d-9c90-cb1b8f1fc9a2", 00:18:30.386 "is_configured": true, 00:18:30.386 "data_offset": 0, 00:18:30.386 "data_size": 65536 00:18:30.386 } 00:18:30.386 ] 00:18:30.386 }' 00:18:30.386 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:30.650 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:30.650 13:13:17 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:30.650 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:30.650 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:18:30.650 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:30.650 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:30.650 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:30.650 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:30.650 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.650 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.650 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.650 13:13:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.650 13:13:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.650 13:13:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.650 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:30.650 "name": "raid_bdev1", 00:18:30.650 "uuid": "c6d40c13-bdd7-4311-8aed-10720b4e5be5", 00:18:30.650 "strip_size_kb": 0, 00:18:30.650 "state": "online", 00:18:30.650 "raid_level": "raid1", 00:18:30.650 "superblock": false, 00:18:30.650 "num_base_bdevs": 2, 00:18:30.650 "num_base_bdevs_discovered": 2, 00:18:30.650 "num_base_bdevs_operational": 2, 00:18:30.650 "base_bdevs_list": [ 00:18:30.650 { 00:18:30.650 "name": "spare", 00:18:30.650 "uuid": "5b00a726-6ccc-55f1-b0ea-40cd6f1e7ea1", 00:18:30.650 "is_configured": true, 
00:18:30.650 "data_offset": 0, 00:18:30.650 "data_size": 65536 00:18:30.650 }, 00:18:30.650 { 00:18:30.650 "name": "BaseBdev2", 00:18:30.650 "uuid": "6db545a1-6a0e-541d-9c90-cb1b8f1fc9a2", 00:18:30.650 "is_configured": true, 00:18:30.650 "data_offset": 0, 00:18:30.650 "data_size": 65536 00:18:30.650 } 00:18:30.650 ] 00:18:30.650 }' 00:18:30.650 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:30.650 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:30.650 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:30.650 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:30.650 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:30.650 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.650 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:30.650 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:30.650 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:30.650 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:30.650 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.650 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.650 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.650 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.650 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.651 13:13:17 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.651 13:13:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.651 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.916 13:13:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.916 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.916 "name": "raid_bdev1", 00:18:30.916 "uuid": "c6d40c13-bdd7-4311-8aed-10720b4e5be5", 00:18:30.916 "strip_size_kb": 0, 00:18:30.916 "state": "online", 00:18:30.916 "raid_level": "raid1", 00:18:30.916 "superblock": false, 00:18:30.916 "num_base_bdevs": 2, 00:18:30.916 "num_base_bdevs_discovered": 2, 00:18:30.916 "num_base_bdevs_operational": 2, 00:18:30.916 "base_bdevs_list": [ 00:18:30.916 { 00:18:30.916 "name": "spare", 00:18:30.916 "uuid": "5b00a726-6ccc-55f1-b0ea-40cd6f1e7ea1", 00:18:30.916 "is_configured": true, 00:18:30.916 "data_offset": 0, 00:18:30.916 "data_size": 65536 00:18:30.916 }, 00:18:30.916 { 00:18:30.916 "name": "BaseBdev2", 00:18:30.916 "uuid": "6db545a1-6a0e-541d-9c90-cb1b8f1fc9a2", 00:18:30.916 "is_configured": true, 00:18:30.916 "data_offset": 0, 00:18:30.916 "data_size": 65536 00:18:30.916 } 00:18:30.916 ] 00:18:30.916 }' 00:18:30.916 13:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.916 13:13:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.483 13:13:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:31.483 13:13:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.483 13:13:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.483 [2024-12-06 13:13:18.198012] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:31.483 [2024-12-06 13:13:18.198226] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:31.483 [2024-12-06 13:13:18.198508] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:31.483 [2024-12-06 13:13:18.198627] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:31.483 [2024-12-06 13:13:18.198648] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:31.483 13:13:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.483 13:13:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.483 13:13:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:18:31.483 13:13:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.483 13:13:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.483 13:13:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.483 13:13:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:31.483 13:13:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:31.483 13:13:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:31.483 13:13:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:31.483 13:13:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:31.483 13:13:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:31.483 13:13:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:31.483 13:13:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:18:31.483 13:13:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:31.483 13:13:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:31.483 13:13:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:31.483 13:13:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:31.483 13:13:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:31.743 /dev/nbd0 00:18:31.743 13:13:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:31.743 13:13:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:31.743 13:13:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:31.743 13:13:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:31.743 13:13:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:31.743 13:13:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:31.743 13:13:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:31.743 13:13:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:31.743 13:13:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:31.743 13:13:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:31.743 13:13:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:31.743 1+0 records in 00:18:31.743 1+0 records out 00:18:31.743 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266201 s, 15.4 MB/s 00:18:31.743 13:13:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:31.743 13:13:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:31.743 13:13:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:31.743 13:13:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:31.743 13:13:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:31.743 13:13:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:31.743 13:13:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:31.743 13:13:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:32.002 /dev/nbd1 00:18:32.002 13:13:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:32.002 13:13:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:32.002 13:13:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:32.002 13:13:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:32.002 13:13:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:32.002 13:13:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:32.002 13:13:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:32.002 13:13:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:32.002 13:13:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:32.002 13:13:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:32.002 13:13:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:32.002 1+0 records in 00:18:32.002 1+0 records out 00:18:32.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038455 s, 10.7 MB/s 00:18:32.002 13:13:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:32.002 13:13:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:32.002 13:13:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:32.002 13:13:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:32.002 13:13:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:32.002 13:13:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:32.002 13:13:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:32.002 13:13:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:32.261 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:32.261 13:13:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:32.261 13:13:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:32.261 13:13:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:32.261 13:13:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:32.261 13:13:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:32.261 13:13:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:32.520 13:13:19 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:32.520 13:13:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:32.520 13:13:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:32.520 13:13:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:32.520 13:13:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:32.520 13:13:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:32.520 13:13:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:32.520 13:13:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:32.520 13:13:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:32.520 13:13:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:32.779 13:13:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:32.779 13:13:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:32.779 13:13:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:32.779 13:13:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:32.779 13:13:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:32.779 13:13:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:32.779 13:13:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:32.779 13:13:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:32.779 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:32.779 13:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75751 00:18:32.779 13:13:19 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75751 ']' 00:18:32.779 13:13:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75751 00:18:32.779 13:13:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:18:32.779 13:13:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:32.779 13:13:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75751 00:18:32.779 killing process with pid 75751 00:18:32.779 Received shutdown signal, test time was about 60.000000 seconds 00:18:32.779 00:18:32.779 Latency(us) 00:18:32.779 [2024-12-06T13:13:19.795Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.779 [2024-12-06T13:13:19.795Z] =================================================================================================================== 00:18:32.779 [2024-12-06T13:13:19.795Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:32.779 13:13:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:32.779 13:13:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:32.779 13:13:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75751' 00:18:32.779 13:13:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75751 00:18:32.779 [2024-12-06 13:13:19.778728] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:32.779 13:13:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75751 00:18:33.037 [2024-12-06 13:13:20.049051] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:34.419 13:13:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:18:34.419 00:18:34.419 real 0m19.002s 00:18:34.419 user 0m21.516s 00:18:34.419 sys 0m3.520s 00:18:34.419 
************************************ 00:18:34.419 END TEST raid_rebuild_test 00:18:34.419 ************************************ 00:18:34.419 13:13:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:34.419 13:13:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.419 13:13:21 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:18:34.419 13:13:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:34.419 13:13:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:34.419 13:13:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:34.419 ************************************ 00:18:34.419 START TEST raid_rebuild_test_sb 00:18:34.419 ************************************ 00:18:34.419 13:13:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:18:34.419 13:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:34.419 13:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:34.419 13:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:34.419 13:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:34.419 13:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:34.419 13:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:34.419 13:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:34.419 13:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:34.419 13:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:34.419 13:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:18:34.419 13:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:34.419 13:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:34.419 13:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:34.419 13:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:34.419 13:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:34.419 13:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:34.419 13:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:34.419 13:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:34.419 13:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:34.419 13:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:34.419 13:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:34.419 13:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:34.419 13:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:34.419 13:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:34.419 13:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=76202 00:18:34.419 13:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 76202 00:18:34.419 13:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:34.419 13:13:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 76202 ']' 00:18:34.419 13:13:21 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.419 13:13:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:34.419 13:13:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.419 13:13:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:34.419 13:13:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.419 [2024-12-06 13:13:21.348838] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:18:34.419 [2024-12-06 13:13:21.349273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76202 ] 00:18:34.419 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:34.419 Zero copy mechanism will not be used. 
00:18:34.678 [2024-12-06 13:13:21.528163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.678 [2024-12-06 13:13:21.674030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.936 [2024-12-06 13:13:21.901529] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:34.936 [2024-12-06 13:13:21.901765] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:35.513 13:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:35.513 13:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:18:35.513 13:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:35.513 13:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:35.513 13:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.513 13:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.513 BaseBdev1_malloc 00:18:35.513 13:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.513 13:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:35.513 13:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.513 13:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.513 [2024-12-06 13:13:22.423102] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:35.513 [2024-12-06 13:13:22.423233] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.513 [2024-12-06 13:13:22.423273] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:35.513 [2024-12-06 
13:13:22.423292] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.513 [2024-12-06 13:13:22.426608] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.513 [2024-12-06 13:13:22.426663] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:35.513 BaseBdev1 00:18:35.513 13:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.513 13:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:35.513 13:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:35.513 13:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.513 13:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.513 BaseBdev2_malloc 00:18:35.513 13:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.513 13:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:35.513 13:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.513 13:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.513 [2024-12-06 13:13:22.483133] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:35.513 [2024-12-06 13:13:22.483235] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.513 [2024-12-06 13:13:22.483265] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:35.513 [2024-12-06 13:13:22.483283] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.513 [2024-12-06 13:13:22.486330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:18:35.513 [2024-12-06 13:13:22.486534] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:35.513 BaseBdev2 00:18:35.513 13:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.513 13:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:35.513 13:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.513 13:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.773 spare_malloc 00:18:35.773 13:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.773 13:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:35.773 13:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.773 13:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.773 spare_delay 00:18:35.773 13:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.773 13:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:35.773 13:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.773 13:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.773 [2024-12-06 13:13:22.560867] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:35.773 [2024-12-06 13:13:22.560967] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.773 [2024-12-06 13:13:22.560999] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:35.773 [2024-12-06 13:13:22.561016] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.773 [2024-12-06 13:13:22.564097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.773 [2024-12-06 13:13:22.564163] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:35.773 spare 00:18:35.773 13:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.773 13:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:35.773 13:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.773 13:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.773 [2024-12-06 13:13:22.573071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:35.773 [2024-12-06 13:13:22.576096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:35.773 [2024-12-06 13:13:22.576535] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:35.773 [2024-12-06 13:13:22.576675] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:35.773 [2024-12-06 13:13:22.577060] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:35.773 [2024-12-06 13:13:22.577452] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:35.773 [2024-12-06 13:13:22.577648] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:35.773 [2024-12-06 13:13:22.578047] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:35.773 13:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.773 13:13:22 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:35.773 13:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:35.773 13:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.773 13:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:35.773 13:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:35.773 13:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:35.773 13:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.773 13:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.773 13:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.773 13:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.773 13:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.773 13:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.773 13:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.773 13:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.773 13:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.773 13:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.773 "name": "raid_bdev1", 00:18:35.773 "uuid": "1e84344e-53de-4863-b777-ccd85111c546", 00:18:35.773 "strip_size_kb": 0, 00:18:35.773 "state": "online", 00:18:35.773 "raid_level": "raid1", 00:18:35.773 "superblock": true, 00:18:35.773 "num_base_bdevs": 2, 00:18:35.773 
"num_base_bdevs_discovered": 2, 00:18:35.773 "num_base_bdevs_operational": 2, 00:18:35.773 "base_bdevs_list": [ 00:18:35.773 { 00:18:35.773 "name": "BaseBdev1", 00:18:35.773 "uuid": "a21177b2-c9e2-565e-b4c5-53522bf42692", 00:18:35.773 "is_configured": true, 00:18:35.773 "data_offset": 2048, 00:18:35.773 "data_size": 63488 00:18:35.773 }, 00:18:35.773 { 00:18:35.773 "name": "BaseBdev2", 00:18:35.773 "uuid": "1accfcba-82b0-56e5-9ecc-a1f1252e698c", 00:18:35.773 "is_configured": true, 00:18:35.773 "data_offset": 2048, 00:18:35.773 "data_size": 63488 00:18:35.773 } 00:18:35.773 ] 00:18:35.773 }' 00:18:35.773 13:13:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.773 13:13:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.342 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:36.342 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:36.342 13:13:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.342 13:13:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.342 [2024-12-06 13:13:23.098730] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:36.342 13:13:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.342 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:18:36.342 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.342 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:36.342 13:13:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.342 13:13:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:36.342 13:13:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.342 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:36.342 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:36.342 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:36.342 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:36.342 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:36.342 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:36.342 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:36.342 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:36.342 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:36.342 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:36.342 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:36.342 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:36.342 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:36.342 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:36.601 [2024-12-06 13:13:23.510446] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:36.601 /dev/nbd0 00:18:36.601 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:36.601 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:18:36.601 13:13:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:36.601 13:13:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:36.601 13:13:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:36.601 13:13:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:36.601 13:13:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:36.601 13:13:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:36.601 13:13:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:36.601 13:13:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:36.601 13:13:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:36.601 1+0 records in 00:18:36.601 1+0 records out 00:18:36.601 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439525 s, 9.3 MB/s 00:18:36.601 13:13:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:36.601 13:13:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:36.601 13:13:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:36.601 13:13:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:36.601 13:13:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:36.601 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:36.601 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:36.601 13:13:23 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:36.601 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:36.601 13:13:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:18:43.233 63488+0 records in 00:18:43.233 63488+0 records out 00:18:43.233 32505856 bytes (33 MB, 31 MiB) copied, 6.26595 s, 5.2 MB/s 00:18:43.233 13:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:43.233 13:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:43.233 13:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:43.233 13:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:43.233 13:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:43.233 13:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:43.233 13:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:43.233 [2024-12-06 13:13:30.138961] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.233 13:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:43.233 13:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:43.233 13:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:43.233 13:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:43.233 13:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:43.233 13:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:18:43.233 13:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:43.233 13:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:43.233 13:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:43.233 13:13:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.233 13:13:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.233 [2024-12-06 13:13:30.177949] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:43.233 13:13:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.233 13:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:43.233 13:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:43.233 13:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:43.233 13:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:43.233 13:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:43.233 13:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:43.233 13:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.233 13:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.233 13:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.233 13:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.233 13:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.233 13:13:30 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.233 13:13:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.233 13:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.233 13:13:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.233 13:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.233 "name": "raid_bdev1", 00:18:43.233 "uuid": "1e84344e-53de-4863-b777-ccd85111c546", 00:18:43.233 "strip_size_kb": 0, 00:18:43.233 "state": "online", 00:18:43.233 "raid_level": "raid1", 00:18:43.233 "superblock": true, 00:18:43.233 "num_base_bdevs": 2, 00:18:43.233 "num_base_bdevs_discovered": 1, 00:18:43.233 "num_base_bdevs_operational": 1, 00:18:43.233 "base_bdevs_list": [ 00:18:43.233 { 00:18:43.233 "name": null, 00:18:43.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.233 "is_configured": false, 00:18:43.233 "data_offset": 0, 00:18:43.233 "data_size": 63488 00:18:43.233 }, 00:18:43.233 { 00:18:43.233 "name": "BaseBdev2", 00:18:43.233 "uuid": "1accfcba-82b0-56e5-9ecc-a1f1252e698c", 00:18:43.233 "is_configured": true, 00:18:43.233 "data_offset": 2048, 00:18:43.233 "data_size": 63488 00:18:43.233 } 00:18:43.233 ] 00:18:43.233 }' 00:18:43.233 13:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.233 13:13:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.798 13:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:43.798 13:13:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.798 13:13:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.798 [2024-12-06 13:13:30.690165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:18:43.798 [2024-12-06 13:13:30.708126] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:18:43.798 13:13:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.798 13:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:43.798 [2024-12-06 13:13:30.710813] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:44.731 13:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:44.731 13:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:44.731 13:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:44.731 13:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:44.731 13:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:44.731 13:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.731 13:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.731 13:13:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.731 13:13:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.731 13:13:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.989 13:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.989 "name": "raid_bdev1", 00:18:44.989 "uuid": "1e84344e-53de-4863-b777-ccd85111c546", 00:18:44.989 "strip_size_kb": 0, 00:18:44.989 "state": "online", 00:18:44.989 "raid_level": "raid1", 00:18:44.989 "superblock": true, 00:18:44.989 "num_base_bdevs": 2, 00:18:44.989 
"num_base_bdevs_discovered": 2, 00:18:44.989 "num_base_bdevs_operational": 2, 00:18:44.989 "process": { 00:18:44.989 "type": "rebuild", 00:18:44.989 "target": "spare", 00:18:44.989 "progress": { 00:18:44.989 "blocks": 20480, 00:18:44.989 "percent": 32 00:18:44.989 } 00:18:44.989 }, 00:18:44.989 "base_bdevs_list": [ 00:18:44.989 { 00:18:44.989 "name": "spare", 00:18:44.989 "uuid": "1a03d954-efbc-50f1-93ff-e1ecbd56a416", 00:18:44.989 "is_configured": true, 00:18:44.989 "data_offset": 2048, 00:18:44.989 "data_size": 63488 00:18:44.989 }, 00:18:44.989 { 00:18:44.989 "name": "BaseBdev2", 00:18:44.989 "uuid": "1accfcba-82b0-56e5-9ecc-a1f1252e698c", 00:18:44.989 "is_configured": true, 00:18:44.989 "data_offset": 2048, 00:18:44.989 "data_size": 63488 00:18:44.989 } 00:18:44.989 ] 00:18:44.989 }' 00:18:44.989 13:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.989 13:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:44.989 13:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.989 13:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:44.989 13:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:44.989 13:13:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.989 13:13:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.989 [2024-12-06 13:13:31.876389] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:44.989 [2024-12-06 13:13:31.922121] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:44.989 [2024-12-06 13:13:31.922212] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:44.989 [2024-12-06 13:13:31.922237] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:44.989 [2024-12-06 13:13:31.922256] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:44.990 13:13:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.990 13:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:44.990 13:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:44.990 13:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:44.990 13:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:44.990 13:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:44.990 13:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:44.990 13:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.990 13:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.990 13:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.990 13:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.990 13:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.990 13:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.990 13:13:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.990 13:13:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.990 13:13:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.267 13:13:32 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.267 "name": "raid_bdev1", 00:18:45.267 "uuid": "1e84344e-53de-4863-b777-ccd85111c546", 00:18:45.267 "strip_size_kb": 0, 00:18:45.267 "state": "online", 00:18:45.267 "raid_level": "raid1", 00:18:45.267 "superblock": true, 00:18:45.267 "num_base_bdevs": 2, 00:18:45.267 "num_base_bdevs_discovered": 1, 00:18:45.267 "num_base_bdevs_operational": 1, 00:18:45.267 "base_bdevs_list": [ 00:18:45.267 { 00:18:45.267 "name": null, 00:18:45.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.267 "is_configured": false, 00:18:45.267 "data_offset": 0, 00:18:45.267 "data_size": 63488 00:18:45.267 }, 00:18:45.267 { 00:18:45.267 "name": "BaseBdev2", 00:18:45.267 "uuid": "1accfcba-82b0-56e5-9ecc-a1f1252e698c", 00:18:45.267 "is_configured": true, 00:18:45.267 "data_offset": 2048, 00:18:45.267 "data_size": 63488 00:18:45.267 } 00:18:45.267 ] 00:18:45.267 }' 00:18:45.267 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.267 13:13:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.526 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:45.526 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:45.526 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:45.526 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:45.526 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:45.526 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.526 13:13:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.526 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.526 13:13:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.526 13:13:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.784 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:45.784 "name": "raid_bdev1", 00:18:45.784 "uuid": "1e84344e-53de-4863-b777-ccd85111c546", 00:18:45.784 "strip_size_kb": 0, 00:18:45.784 "state": "online", 00:18:45.784 "raid_level": "raid1", 00:18:45.784 "superblock": true, 00:18:45.784 "num_base_bdevs": 2, 00:18:45.784 "num_base_bdevs_discovered": 1, 00:18:45.784 "num_base_bdevs_operational": 1, 00:18:45.784 "base_bdevs_list": [ 00:18:45.784 { 00:18:45.784 "name": null, 00:18:45.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.784 "is_configured": false, 00:18:45.784 "data_offset": 0, 00:18:45.784 "data_size": 63488 00:18:45.784 }, 00:18:45.784 { 00:18:45.784 "name": "BaseBdev2", 00:18:45.784 "uuid": "1accfcba-82b0-56e5-9ecc-a1f1252e698c", 00:18:45.784 "is_configured": true, 00:18:45.784 "data_offset": 2048, 00:18:45.784 "data_size": 63488 00:18:45.784 } 00:18:45.784 ] 00:18:45.784 }' 00:18:45.784 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:45.784 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:45.784 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:45.784 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:45.784 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:45.784 13:13:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.784 13:13:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:45.784 [2024-12-06 13:13:32.648068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:45.784 [2024-12-06 13:13:32.665201] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:18:45.784 13:13:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.784 13:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:45.785 [2024-12-06 13:13:32.668030] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:46.722 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:46.722 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:46.722 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:46.722 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:46.722 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:46.722 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.722 13:13:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.722 13:13:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.722 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.722 13:13:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.722 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:46.722 "name": "raid_bdev1", 00:18:46.722 "uuid": "1e84344e-53de-4863-b777-ccd85111c546", 00:18:46.722 "strip_size_kb": 0, 00:18:46.722 "state": "online", 00:18:46.722 "raid_level": "raid1", 
00:18:46.722 "superblock": true, 00:18:46.722 "num_base_bdevs": 2, 00:18:46.722 "num_base_bdevs_discovered": 2, 00:18:46.722 "num_base_bdevs_operational": 2, 00:18:46.722 "process": { 00:18:46.722 "type": "rebuild", 00:18:46.722 "target": "spare", 00:18:46.722 "progress": { 00:18:46.722 "blocks": 18432, 00:18:46.722 "percent": 29 00:18:46.722 } 00:18:46.722 }, 00:18:46.722 "base_bdevs_list": [ 00:18:46.722 { 00:18:46.722 "name": "spare", 00:18:46.722 "uuid": "1a03d954-efbc-50f1-93ff-e1ecbd56a416", 00:18:46.722 "is_configured": true, 00:18:46.722 "data_offset": 2048, 00:18:46.722 "data_size": 63488 00:18:46.722 }, 00:18:46.722 { 00:18:46.722 "name": "BaseBdev2", 00:18:46.722 "uuid": "1accfcba-82b0-56e5-9ecc-a1f1252e698c", 00:18:46.722 "is_configured": true, 00:18:46.722 "data_offset": 2048, 00:18:46.722 "data_size": 63488 00:18:46.722 } 00:18:46.722 ] 00:18:46.722 }' 00:18:46.722 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:46.981 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:46.981 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:46.981 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:46.981 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:46.981 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:46.981 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:46.981 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:46.981 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:46.981 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:46.981 13:13:33 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=424 00:18:46.981 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:46.981 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:46.981 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:46.981 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:46.981 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:46.981 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:46.981 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.981 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.981 13:13:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.981 13:13:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.981 13:13:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.981 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:46.981 "name": "raid_bdev1", 00:18:46.981 "uuid": "1e84344e-53de-4863-b777-ccd85111c546", 00:18:46.981 "strip_size_kb": 0, 00:18:46.981 "state": "online", 00:18:46.981 "raid_level": "raid1", 00:18:46.981 "superblock": true, 00:18:46.981 "num_base_bdevs": 2, 00:18:46.981 "num_base_bdevs_discovered": 2, 00:18:46.981 "num_base_bdevs_operational": 2, 00:18:46.981 "process": { 00:18:46.981 "type": "rebuild", 00:18:46.981 "target": "spare", 00:18:46.981 "progress": { 00:18:46.981 "blocks": 22528, 00:18:46.981 "percent": 35 00:18:46.981 } 00:18:46.981 }, 00:18:46.981 "base_bdevs_list": [ 
00:18:46.981 { 00:18:46.981 "name": "spare", 00:18:46.981 "uuid": "1a03d954-efbc-50f1-93ff-e1ecbd56a416", 00:18:46.981 "is_configured": true, 00:18:46.981 "data_offset": 2048, 00:18:46.981 "data_size": 63488 00:18:46.981 }, 00:18:46.981 { 00:18:46.981 "name": "BaseBdev2", 00:18:46.981 "uuid": "1accfcba-82b0-56e5-9ecc-a1f1252e698c", 00:18:46.981 "is_configured": true, 00:18:46.981 "data_offset": 2048, 00:18:46.981 "data_size": 63488 00:18:46.981 } 00:18:46.981 ] 00:18:46.981 }' 00:18:46.981 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:46.981 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:46.981 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:47.241 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:47.241 13:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:48.179 13:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:48.179 13:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:48.179 13:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:48.179 13:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:48.179 13:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:48.179 13:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:48.179 13:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.179 13:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.179 13:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.179 13:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.179 13:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.179 13:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:48.179 "name": "raid_bdev1", 00:18:48.179 "uuid": "1e84344e-53de-4863-b777-ccd85111c546", 00:18:48.179 "strip_size_kb": 0, 00:18:48.179 "state": "online", 00:18:48.179 "raid_level": "raid1", 00:18:48.179 "superblock": true, 00:18:48.179 "num_base_bdevs": 2, 00:18:48.179 "num_base_bdevs_discovered": 2, 00:18:48.179 "num_base_bdevs_operational": 2, 00:18:48.179 "process": { 00:18:48.179 "type": "rebuild", 00:18:48.179 "target": "spare", 00:18:48.179 "progress": { 00:18:48.179 "blocks": 47104, 00:18:48.179 "percent": 74 00:18:48.179 } 00:18:48.179 }, 00:18:48.179 "base_bdevs_list": [ 00:18:48.179 { 00:18:48.179 "name": "spare", 00:18:48.179 "uuid": "1a03d954-efbc-50f1-93ff-e1ecbd56a416", 00:18:48.179 "is_configured": true, 00:18:48.179 "data_offset": 2048, 00:18:48.180 "data_size": 63488 00:18:48.180 }, 00:18:48.180 { 00:18:48.180 "name": "BaseBdev2", 00:18:48.180 "uuid": "1accfcba-82b0-56e5-9ecc-a1f1252e698c", 00:18:48.180 "is_configured": true, 00:18:48.180 "data_offset": 2048, 00:18:48.180 "data_size": 63488 00:18:48.180 } 00:18:48.180 ] 00:18:48.180 }' 00:18:48.180 13:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:48.180 13:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:48.180 13:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:48.180 13:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:48.180 13:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:49.115 [2024-12-06 
13:13:35.799538] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:49.115 [2024-12-06 13:13:35.799736] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:49.115 [2024-12-06 13:13:35.799926] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:49.374 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:49.374 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:49.374 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:49.374 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:49.374 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:49.374 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:49.374 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.374 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.374 13:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.374 13:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.374 13:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.374 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:49.374 "name": "raid_bdev1", 00:18:49.374 "uuid": "1e84344e-53de-4863-b777-ccd85111c546", 00:18:49.374 "strip_size_kb": 0, 00:18:49.374 "state": "online", 00:18:49.374 "raid_level": "raid1", 00:18:49.374 "superblock": true, 00:18:49.374 "num_base_bdevs": 2, 00:18:49.374 "num_base_bdevs_discovered": 2, 00:18:49.374 
"num_base_bdevs_operational": 2, 00:18:49.374 "base_bdevs_list": [ 00:18:49.374 { 00:18:49.374 "name": "spare", 00:18:49.374 "uuid": "1a03d954-efbc-50f1-93ff-e1ecbd56a416", 00:18:49.374 "is_configured": true, 00:18:49.374 "data_offset": 2048, 00:18:49.374 "data_size": 63488 00:18:49.374 }, 00:18:49.374 { 00:18:49.374 "name": "BaseBdev2", 00:18:49.374 "uuid": "1accfcba-82b0-56e5-9ecc-a1f1252e698c", 00:18:49.374 "is_configured": true, 00:18:49.374 "data_offset": 2048, 00:18:49.374 "data_size": 63488 00:18:49.374 } 00:18:49.374 ] 00:18:49.374 }' 00:18:49.374 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:49.374 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:49.374 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:49.374 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:49.374 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:18:49.374 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:49.374 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:49.374 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:49.374 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:49.374 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:49.374 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.374 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.374 13:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:49.374 13:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.374 13:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.374 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:49.374 "name": "raid_bdev1", 00:18:49.374 "uuid": "1e84344e-53de-4863-b777-ccd85111c546", 00:18:49.374 "strip_size_kb": 0, 00:18:49.374 "state": "online", 00:18:49.374 "raid_level": "raid1", 00:18:49.375 "superblock": true, 00:18:49.375 "num_base_bdevs": 2, 00:18:49.375 "num_base_bdevs_discovered": 2, 00:18:49.375 "num_base_bdevs_operational": 2, 00:18:49.375 "base_bdevs_list": [ 00:18:49.375 { 00:18:49.375 "name": "spare", 00:18:49.375 "uuid": "1a03d954-efbc-50f1-93ff-e1ecbd56a416", 00:18:49.375 "is_configured": true, 00:18:49.375 "data_offset": 2048, 00:18:49.375 "data_size": 63488 00:18:49.375 }, 00:18:49.375 { 00:18:49.375 "name": "BaseBdev2", 00:18:49.375 "uuid": "1accfcba-82b0-56e5-9ecc-a1f1252e698c", 00:18:49.375 "is_configured": true, 00:18:49.375 "data_offset": 2048, 00:18:49.375 "data_size": 63488 00:18:49.375 } 00:18:49.375 ] 00:18:49.375 }' 00:18:49.375 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:49.633 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:49.633 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:49.633 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:49.633 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:49.633 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:49.633 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:49.633 13:13:36 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:49.633 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:49.633 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:49.633 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.633 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.633 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.633 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.633 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.633 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.633 13:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.633 13:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.633 13:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.633 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.633 "name": "raid_bdev1", 00:18:49.633 "uuid": "1e84344e-53de-4863-b777-ccd85111c546", 00:18:49.633 "strip_size_kb": 0, 00:18:49.633 "state": "online", 00:18:49.633 "raid_level": "raid1", 00:18:49.633 "superblock": true, 00:18:49.633 "num_base_bdevs": 2, 00:18:49.633 "num_base_bdevs_discovered": 2, 00:18:49.633 "num_base_bdevs_operational": 2, 00:18:49.633 "base_bdevs_list": [ 00:18:49.633 { 00:18:49.633 "name": "spare", 00:18:49.633 "uuid": "1a03d954-efbc-50f1-93ff-e1ecbd56a416", 00:18:49.633 "is_configured": true, 00:18:49.633 "data_offset": 2048, 00:18:49.633 "data_size": 63488 00:18:49.633 }, 00:18:49.633 { 
00:18:49.633 "name": "BaseBdev2", 00:18:49.633 "uuid": "1accfcba-82b0-56e5-9ecc-a1f1252e698c", 00:18:49.633 "is_configured": true, 00:18:49.633 "data_offset": 2048, 00:18:49.633 "data_size": 63488 00:18:49.633 } 00:18:49.633 ] 00:18:49.633 }' 00:18:49.633 13:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.633 13:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.201 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:50.201 13:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.201 13:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.201 [2024-12-06 13:13:37.020575] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:50.201 [2024-12-06 13:13:37.020620] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:50.201 [2024-12-06 13:13:37.020738] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:50.201 [2024-12-06 13:13:37.020904] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:50.201 [2024-12-06 13:13:37.020932] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:50.201 13:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.201 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.201 13:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.201 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:18:50.201 13:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.201 13:13:37 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.201 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:50.201 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:50.201 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:50.201 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:50.201 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:50.201 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:50.201 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:50.201 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:50.201 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:50.201 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:50.201 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:50.201 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:50.201 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:50.460 /dev/nbd0 00:18:50.460 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:50.460 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:50.460 13:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:50.460 13:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:18:50.460 13:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:50.460 13:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:50.460 13:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:50.460 13:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:50.460 13:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:50.460 13:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:50.460 13:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:50.460 1+0 records in 00:18:50.460 1+0 records out 00:18:50.460 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396414 s, 10.3 MB/s 00:18:50.460 13:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:50.460 13:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:50.460 13:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:50.460 13:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:50.460 13:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:50.460 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:50.460 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:50.460 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:51.029 /dev/nbd1 00:18:51.029 13:13:37 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:51.029 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:51.029 13:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:51.029 13:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:51.029 13:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:51.029 13:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:51.029 13:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:51.029 13:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:51.029 13:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:51.029 13:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:51.029 13:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:51.029 1+0 records in 00:18:51.029 1+0 records out 00:18:51.029 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000511503 s, 8.0 MB/s 00:18:51.029 13:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:51.029 13:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:51.029 13:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:51.029 13:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:51.029 13:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:51.029 13:13:37 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:51.029 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:51.029 13:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:51.029 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:51.029 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:51.029 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:51.029 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:51.029 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:51.029 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:51.029 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:51.596 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:51.596 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:51.596 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:51.596 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:51.596 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:51.596 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:51.596 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:51.596 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:51.596 13:13:38 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:51.596 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.855 [2024-12-06 13:13:38.682232] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:18:51.855 [2024-12-06 13:13:38.682330] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:51.855 [2024-12-06 13:13:38.682373] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:51.855 [2024-12-06 13:13:38.682388] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:51.855 [2024-12-06 13:13:38.685847] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:51.855 [2024-12-06 13:13:38.685935] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:51.855 [2024-12-06 13:13:38.686117] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:51.855 [2024-12-06 13:13:38.686203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:51.855 [2024-12-06 13:13:38.686430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:51.855 spare 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.855 [2024-12-06 13:13:38.786658] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:51.855 [2024-12-06 13:13:38.786800] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:51.855 [2024-12-06 13:13:38.787343] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:18:51.855 [2024-12-06 13:13:38.787741] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:51.855 [2024-12-06 13:13:38.787771] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:51.855 [2024-12-06 13:13:38.788058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.855 
13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.855 "name": "raid_bdev1", 00:18:51.855 "uuid": "1e84344e-53de-4863-b777-ccd85111c546", 00:18:51.855 "strip_size_kb": 0, 00:18:51.855 "state": "online", 00:18:51.855 "raid_level": "raid1", 00:18:51.855 "superblock": true, 00:18:51.855 "num_base_bdevs": 2, 00:18:51.855 "num_base_bdevs_discovered": 2, 00:18:51.855 "num_base_bdevs_operational": 2, 00:18:51.855 "base_bdevs_list": [ 00:18:51.855 { 00:18:51.855 "name": "spare", 00:18:51.855 "uuid": "1a03d954-efbc-50f1-93ff-e1ecbd56a416", 00:18:51.855 "is_configured": true, 00:18:51.855 "data_offset": 2048, 00:18:51.855 "data_size": 63488 00:18:51.855 }, 00:18:51.855 { 00:18:51.855 "name": "BaseBdev2", 00:18:51.855 "uuid": "1accfcba-82b0-56e5-9ecc-a1f1252e698c", 00:18:51.855 "is_configured": true, 00:18:51.855 "data_offset": 2048, 00:18:51.855 "data_size": 63488 00:18:51.855 } 00:18:51.855 ] 00:18:51.855 }' 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.855 13:13:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.421 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:52.421 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:52.421 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:52.421 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:52.421 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.421 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.421 13:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.421 13:13:39 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:52.421 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.421 13:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.421 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.421 "name": "raid_bdev1", 00:18:52.421 "uuid": "1e84344e-53de-4863-b777-ccd85111c546", 00:18:52.421 "strip_size_kb": 0, 00:18:52.421 "state": "online", 00:18:52.421 "raid_level": "raid1", 00:18:52.421 "superblock": true, 00:18:52.421 "num_base_bdevs": 2, 00:18:52.421 "num_base_bdevs_discovered": 2, 00:18:52.421 "num_base_bdevs_operational": 2, 00:18:52.421 "base_bdevs_list": [ 00:18:52.421 { 00:18:52.421 "name": "spare", 00:18:52.421 "uuid": "1a03d954-efbc-50f1-93ff-e1ecbd56a416", 00:18:52.421 "is_configured": true, 00:18:52.421 "data_offset": 2048, 00:18:52.421 "data_size": 63488 00:18:52.421 }, 00:18:52.421 { 00:18:52.421 "name": "BaseBdev2", 00:18:52.421 "uuid": "1accfcba-82b0-56e5-9ecc-a1f1252e698c", 00:18:52.421 "is_configured": true, 00:18:52.421 "data_offset": 2048, 00:18:52.421 "data_size": 63488 00:18:52.421 } 00:18:52.421 ] 00:18:52.421 }' 00:18:52.421 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.421 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:52.421 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.680 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:52.680 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.680 13:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.680 13:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:18:52.680 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:52.680 13:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.680 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:52.680 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:52.680 13:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.680 13:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.680 [2024-12-06 13:13:39.538948] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:52.680 13:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.680 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:52.680 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:52.680 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:52.680 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:52.680 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:52.680 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:52.680 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.680 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.680 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:52.680 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.680 13:13:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.680 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.680 13:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.680 13:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.681 13:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.681 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.681 "name": "raid_bdev1", 00:18:52.681 "uuid": "1e84344e-53de-4863-b777-ccd85111c546", 00:18:52.681 "strip_size_kb": 0, 00:18:52.681 "state": "online", 00:18:52.681 "raid_level": "raid1", 00:18:52.681 "superblock": true, 00:18:52.681 "num_base_bdevs": 2, 00:18:52.681 "num_base_bdevs_discovered": 1, 00:18:52.681 "num_base_bdevs_operational": 1, 00:18:52.681 "base_bdevs_list": [ 00:18:52.681 { 00:18:52.681 "name": null, 00:18:52.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.681 "is_configured": false, 00:18:52.681 "data_offset": 0, 00:18:52.681 "data_size": 63488 00:18:52.681 }, 00:18:52.681 { 00:18:52.681 "name": "BaseBdev2", 00:18:52.681 "uuid": "1accfcba-82b0-56e5-9ecc-a1f1252e698c", 00:18:52.681 "is_configured": true, 00:18:52.681 "data_offset": 2048, 00:18:52.681 "data_size": 63488 00:18:52.681 } 00:18:52.681 ] 00:18:52.681 }' 00:18:52.681 13:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.681 13:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.248 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:53.248 13:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.248 13:13:40 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:53.248 [2024-12-06 13:13:40.067145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:53.248 [2024-12-06 13:13:40.067474] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:53.248 [2024-12-06 13:13:40.067537] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:53.248 [2024-12-06 13:13:40.067799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:53.248 [2024-12-06 13:13:40.085224] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:18:53.248 13:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.248 13:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:53.248 [2024-12-06 13:13:40.088483] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:54.184 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:54.184 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:54.184 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:54.184 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:54.184 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:54.184 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.184 13:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.184 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.184 13:13:41 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.184 13:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.184 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:54.184 "name": "raid_bdev1", 00:18:54.184 "uuid": "1e84344e-53de-4863-b777-ccd85111c546", 00:18:54.184 "strip_size_kb": 0, 00:18:54.184 "state": "online", 00:18:54.184 "raid_level": "raid1", 00:18:54.184 "superblock": true, 00:18:54.184 "num_base_bdevs": 2, 00:18:54.184 "num_base_bdevs_discovered": 2, 00:18:54.184 "num_base_bdevs_operational": 2, 00:18:54.184 "process": { 00:18:54.184 "type": "rebuild", 00:18:54.184 "target": "spare", 00:18:54.184 "progress": { 00:18:54.184 "blocks": 18432, 00:18:54.184 "percent": 29 00:18:54.184 } 00:18:54.184 }, 00:18:54.184 "base_bdevs_list": [ 00:18:54.184 { 00:18:54.184 "name": "spare", 00:18:54.184 "uuid": "1a03d954-efbc-50f1-93ff-e1ecbd56a416", 00:18:54.184 "is_configured": true, 00:18:54.184 "data_offset": 2048, 00:18:54.184 "data_size": 63488 00:18:54.184 }, 00:18:54.184 { 00:18:54.184 "name": "BaseBdev2", 00:18:54.184 "uuid": "1accfcba-82b0-56e5-9ecc-a1f1252e698c", 00:18:54.184 "is_configured": true, 00:18:54.184 "data_offset": 2048, 00:18:54.184 "data_size": 63488 00:18:54.184 } 00:18:54.184 ] 00:18:54.184 }' 00:18:54.184 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:54.443 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:54.443 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:54.443 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:54.443 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:54.443 13:13:41 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.443 13:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.443 [2024-12-06 13:13:41.262729] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:54.443 [2024-12-06 13:13:41.300895] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:54.443 [2024-12-06 13:13:41.301022] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:54.443 [2024-12-06 13:13:41.301048] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:54.443 [2024-12-06 13:13:41.301064] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:54.443 13:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.443 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:54.443 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:54.443 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:54.443 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:54.443 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:54.443 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:54.443 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.443 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.443 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.443 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.443 
13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.443 13:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.443 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.443 13:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.443 13:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.443 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.443 "name": "raid_bdev1", 00:18:54.443 "uuid": "1e84344e-53de-4863-b777-ccd85111c546", 00:18:54.443 "strip_size_kb": 0, 00:18:54.443 "state": "online", 00:18:54.443 "raid_level": "raid1", 00:18:54.443 "superblock": true, 00:18:54.443 "num_base_bdevs": 2, 00:18:54.443 "num_base_bdevs_discovered": 1, 00:18:54.443 "num_base_bdevs_operational": 1, 00:18:54.443 "base_bdevs_list": [ 00:18:54.444 { 00:18:54.444 "name": null, 00:18:54.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.444 "is_configured": false, 00:18:54.444 "data_offset": 0, 00:18:54.444 "data_size": 63488 00:18:54.444 }, 00:18:54.444 { 00:18:54.444 "name": "BaseBdev2", 00:18:54.444 "uuid": "1accfcba-82b0-56e5-9ecc-a1f1252e698c", 00:18:54.444 "is_configured": true, 00:18:54.444 "data_offset": 2048, 00:18:54.444 "data_size": 63488 00:18:54.444 } 00:18:54.444 ] 00:18:54.444 }' 00:18:54.444 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.444 13:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.012 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:55.012 13:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.012 13:13:41 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:55.012 [2024-12-06 13:13:41.876377] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:55.012 [2024-12-06 13:13:41.876533] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.012 [2024-12-06 13:13:41.876571] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:55.012 [2024-12-06 13:13:41.876590] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.012 [2024-12-06 13:13:41.877292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.012 [2024-12-06 13:13:41.877352] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:55.012 [2024-12-06 13:13:41.877500] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:55.012 [2024-12-06 13:13:41.877530] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:55.012 [2024-12-06 13:13:41.877546] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:55.012 [2024-12-06 13:13:41.877587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:55.012 [2024-12-06 13:13:41.895908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:18:55.012 spare 00:18:55.012 13:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.012 13:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:55.012 [2024-12-06 13:13:41.898978] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:55.949 13:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:55.949 13:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:55.949 13:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:55.949 13:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:55.949 13:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:55.949 13:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.949 13:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.949 13:13:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.949 13:13:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.949 13:13:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.949 13:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:55.949 "name": "raid_bdev1", 00:18:55.949 "uuid": "1e84344e-53de-4863-b777-ccd85111c546", 00:18:55.949 "strip_size_kb": 0, 00:18:55.949 "state": "online", 00:18:55.949 
"raid_level": "raid1", 00:18:55.949 "superblock": true, 00:18:55.949 "num_base_bdevs": 2, 00:18:55.949 "num_base_bdevs_discovered": 2, 00:18:55.949 "num_base_bdevs_operational": 2, 00:18:55.949 "process": { 00:18:55.949 "type": "rebuild", 00:18:55.949 "target": "spare", 00:18:55.949 "progress": { 00:18:55.949 "blocks": 20480, 00:18:55.949 "percent": 32 00:18:55.949 } 00:18:55.949 }, 00:18:55.949 "base_bdevs_list": [ 00:18:55.949 { 00:18:55.949 "name": "spare", 00:18:55.949 "uuid": "1a03d954-efbc-50f1-93ff-e1ecbd56a416", 00:18:55.949 "is_configured": true, 00:18:55.949 "data_offset": 2048, 00:18:55.949 "data_size": 63488 00:18:55.949 }, 00:18:55.949 { 00:18:55.949 "name": "BaseBdev2", 00:18:55.949 "uuid": "1accfcba-82b0-56e5-9ecc-a1f1252e698c", 00:18:55.949 "is_configured": true, 00:18:55.949 "data_offset": 2048, 00:18:55.949 "data_size": 63488 00:18:55.949 } 00:18:55.949 ] 00:18:55.949 }' 00:18:55.949 13:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:56.209 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:56.209 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:56.209 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:56.209 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:56.209 13:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.209 13:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.209 [2024-12-06 13:13:43.069273] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:56.209 [2024-12-06 13:13:43.109954] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:56.209 [2024-12-06 13:13:43.110057] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:56.209 [2024-12-06 13:13:43.110082] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:56.209 [2024-12-06 13:13:43.110093] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:56.209 13:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.209 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:56.209 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:56.209 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:56.209 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:56.209 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:56.209 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:56.209 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.209 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.209 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.209 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.209 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.209 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.209 13:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.209 13:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.209 13:13:43 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.209 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.209 "name": "raid_bdev1", 00:18:56.209 "uuid": "1e84344e-53de-4863-b777-ccd85111c546", 00:18:56.209 "strip_size_kb": 0, 00:18:56.209 "state": "online", 00:18:56.209 "raid_level": "raid1", 00:18:56.209 "superblock": true, 00:18:56.209 "num_base_bdevs": 2, 00:18:56.209 "num_base_bdevs_discovered": 1, 00:18:56.209 "num_base_bdevs_operational": 1, 00:18:56.209 "base_bdevs_list": [ 00:18:56.209 { 00:18:56.209 "name": null, 00:18:56.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.209 "is_configured": false, 00:18:56.209 "data_offset": 0, 00:18:56.209 "data_size": 63488 00:18:56.209 }, 00:18:56.209 { 00:18:56.209 "name": "BaseBdev2", 00:18:56.209 "uuid": "1accfcba-82b0-56e5-9ecc-a1f1252e698c", 00:18:56.209 "is_configured": true, 00:18:56.209 "data_offset": 2048, 00:18:56.209 "data_size": 63488 00:18:56.209 } 00:18:56.209 ] 00:18:56.209 }' 00:18:56.209 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.209 13:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.777 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:56.777 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:56.777 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:56.777 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:56.777 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:56.777 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.777 13:13:43 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.777 13:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.777 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.777 13:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.777 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:56.777 "name": "raid_bdev1", 00:18:56.777 "uuid": "1e84344e-53de-4863-b777-ccd85111c546", 00:18:56.777 "strip_size_kb": 0, 00:18:56.777 "state": "online", 00:18:56.777 "raid_level": "raid1", 00:18:56.777 "superblock": true, 00:18:56.777 "num_base_bdevs": 2, 00:18:56.777 "num_base_bdevs_discovered": 1, 00:18:56.777 "num_base_bdevs_operational": 1, 00:18:56.777 "base_bdevs_list": [ 00:18:56.777 { 00:18:56.777 "name": null, 00:18:56.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.777 "is_configured": false, 00:18:56.778 "data_offset": 0, 00:18:56.778 "data_size": 63488 00:18:56.778 }, 00:18:56.778 { 00:18:56.778 "name": "BaseBdev2", 00:18:56.778 "uuid": "1accfcba-82b0-56e5-9ecc-a1f1252e698c", 00:18:56.778 "is_configured": true, 00:18:56.778 "data_offset": 2048, 00:18:56.778 "data_size": 63488 00:18:56.778 } 00:18:56.778 ] 00:18:56.778 }' 00:18:56.778 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:56.778 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:56.778 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:57.037 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:57.037 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:57.037 13:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:18:57.037 13:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.037 13:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.037 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:57.037 13:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.037 13:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.037 [2024-12-06 13:13:43.840267] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:57.037 [2024-12-06 13:13:43.840372] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:57.037 [2024-12-06 13:13:43.840422] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:57.037 [2024-12-06 13:13:43.840449] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:57.037 [2024-12-06 13:13:43.841150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:57.037 [2024-12-06 13:13:43.841210] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:57.037 [2024-12-06 13:13:43.841326] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:57.037 [2024-12-06 13:13:43.841349] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:57.037 [2024-12-06 13:13:43.841379] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:57.037 [2024-12-06 13:13:43.841424] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:57.037 BaseBdev1 00:18:57.037 13:13:43 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.037 13:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:57.973 13:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:57.973 13:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:57.973 13:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:57.973 13:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:57.973 13:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:57.973 13:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:57.973 13:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.973 13:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.973 13:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.973 13:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.973 13:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.973 13:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.973 13:13:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.973 13:13:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.973 13:13:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.973 13:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.973 "name": "raid_bdev1", 00:18:57.973 "uuid": "1e84344e-53de-4863-b777-ccd85111c546", 00:18:57.973 
"strip_size_kb": 0, 00:18:57.973 "state": "online", 00:18:57.973 "raid_level": "raid1", 00:18:57.973 "superblock": true, 00:18:57.973 "num_base_bdevs": 2, 00:18:57.973 "num_base_bdevs_discovered": 1, 00:18:57.973 "num_base_bdevs_operational": 1, 00:18:57.973 "base_bdevs_list": [ 00:18:57.973 { 00:18:57.973 "name": null, 00:18:57.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.973 "is_configured": false, 00:18:57.973 "data_offset": 0, 00:18:57.973 "data_size": 63488 00:18:57.973 }, 00:18:57.973 { 00:18:57.973 "name": "BaseBdev2", 00:18:57.973 "uuid": "1accfcba-82b0-56e5-9ecc-a1f1252e698c", 00:18:57.973 "is_configured": true, 00:18:57.973 "data_offset": 2048, 00:18:57.973 "data_size": 63488 00:18:57.973 } 00:18:57.973 ] 00:18:57.973 }' 00:18:57.973 13:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.973 13:13:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.539 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:58.539 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:58.539 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:58.539 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:58.539 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:58.539 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.539 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.539 13:13:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.539 13:13:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.539 13:13:45 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.539 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:58.539 "name": "raid_bdev1", 00:18:58.539 "uuid": "1e84344e-53de-4863-b777-ccd85111c546", 00:18:58.539 "strip_size_kb": 0, 00:18:58.539 "state": "online", 00:18:58.539 "raid_level": "raid1", 00:18:58.539 "superblock": true, 00:18:58.539 "num_base_bdevs": 2, 00:18:58.539 "num_base_bdevs_discovered": 1, 00:18:58.539 "num_base_bdevs_operational": 1, 00:18:58.539 "base_bdevs_list": [ 00:18:58.539 { 00:18:58.539 "name": null, 00:18:58.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.539 "is_configured": false, 00:18:58.539 "data_offset": 0, 00:18:58.539 "data_size": 63488 00:18:58.539 }, 00:18:58.539 { 00:18:58.539 "name": "BaseBdev2", 00:18:58.539 "uuid": "1accfcba-82b0-56e5-9ecc-a1f1252e698c", 00:18:58.539 "is_configured": true, 00:18:58.539 "data_offset": 2048, 00:18:58.539 "data_size": 63488 00:18:58.539 } 00:18:58.539 ] 00:18:58.539 }' 00:18:58.539 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:58.539 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:58.539 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:58.539 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:58.539 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:58.539 13:13:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:18:58.539 13:13:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:58.539 13:13:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local 
arg=rpc_cmd 00:18:58.539 13:13:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:58.539 13:13:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:58.539 13:13:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:58.539 13:13:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:58.796 13:13:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.796 13:13:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.796 [2024-12-06 13:13:45.561022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:58.796 [2024-12-06 13:13:45.561356] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:58.796 [2024-12-06 13:13:45.561403] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:58.796 request: 00:18:58.796 { 00:18:58.796 "base_bdev": "BaseBdev1", 00:18:58.796 "raid_bdev": "raid_bdev1", 00:18:58.796 "method": "bdev_raid_add_base_bdev", 00:18:58.796 "req_id": 1 00:18:58.796 } 00:18:58.796 Got JSON-RPC error response 00:18:58.797 response: 00:18:58.797 { 00:18:58.797 "code": -22, 00:18:58.797 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:58.797 } 00:18:58.797 13:13:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:58.797 13:13:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:18:58.797 13:13:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:58.797 13:13:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:58.797 13:13:45 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:58.797 13:13:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:59.732 13:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:59.732 13:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:59.732 13:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:59.732 13:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:59.732 13:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:59.732 13:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:59.732 13:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.732 13:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.732 13:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.732 13:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.732 13:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.732 13:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.732 13:13:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.732 13:13:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.732 13:13:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.732 13:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.732 "name": "raid_bdev1", 00:18:59.732 "uuid": 
"1e84344e-53de-4863-b777-ccd85111c546", 00:18:59.732 "strip_size_kb": 0, 00:18:59.732 "state": "online", 00:18:59.732 "raid_level": "raid1", 00:18:59.732 "superblock": true, 00:18:59.732 "num_base_bdevs": 2, 00:18:59.732 "num_base_bdevs_discovered": 1, 00:18:59.732 "num_base_bdevs_operational": 1, 00:18:59.732 "base_bdevs_list": [ 00:18:59.732 { 00:18:59.732 "name": null, 00:18:59.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.732 "is_configured": false, 00:18:59.732 "data_offset": 0, 00:18:59.732 "data_size": 63488 00:18:59.732 }, 00:18:59.732 { 00:18:59.732 "name": "BaseBdev2", 00:18:59.732 "uuid": "1accfcba-82b0-56e5-9ecc-a1f1252e698c", 00:18:59.732 "is_configured": true, 00:18:59.732 "data_offset": 2048, 00:18:59.732 "data_size": 63488 00:18:59.732 } 00:18:59.732 ] 00:18:59.732 }' 00:18:59.732 13:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.732 13:13:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.298 13:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:00.298 13:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:00.298 13:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:00.298 13:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:00.298 13:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:00.298 13:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.298 13:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.298 13:13:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.298 13:13:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:19:00.298 13:13:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.298 13:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:00.298 "name": "raid_bdev1", 00:19:00.298 "uuid": "1e84344e-53de-4863-b777-ccd85111c546", 00:19:00.298 "strip_size_kb": 0, 00:19:00.298 "state": "online", 00:19:00.298 "raid_level": "raid1", 00:19:00.298 "superblock": true, 00:19:00.298 "num_base_bdevs": 2, 00:19:00.298 "num_base_bdevs_discovered": 1, 00:19:00.298 "num_base_bdevs_operational": 1, 00:19:00.298 "base_bdevs_list": [ 00:19:00.298 { 00:19:00.298 "name": null, 00:19:00.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.298 "is_configured": false, 00:19:00.298 "data_offset": 0, 00:19:00.298 "data_size": 63488 00:19:00.298 }, 00:19:00.298 { 00:19:00.298 "name": "BaseBdev2", 00:19:00.298 "uuid": "1accfcba-82b0-56e5-9ecc-a1f1252e698c", 00:19:00.298 "is_configured": true, 00:19:00.298 "data_offset": 2048, 00:19:00.298 "data_size": 63488 00:19:00.298 } 00:19:00.298 ] 00:19:00.298 }' 00:19:00.298 13:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:00.298 13:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:00.298 13:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:00.298 13:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:00.298 13:13:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 76202 00:19:00.298 13:13:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 76202 ']' 00:19:00.298 13:13:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 76202 00:19:00.298 13:13:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:19:00.298 13:13:47 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.298 13:13:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76202 00:19:00.298 killing process with pid 76202 00:19:00.298 Received shutdown signal, test time was about 60.000000 seconds 00:19:00.298 00:19:00.298 Latency(us) 00:19:00.298 [2024-12-06T13:13:47.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.298 [2024-12-06T13:13:47.314Z] =================================================================================================================== 00:19:00.298 [2024-12-06T13:13:47.314Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:00.298 13:13:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:00.298 13:13:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:00.298 13:13:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76202' 00:19:00.298 13:13:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 76202 00:19:00.298 [2024-12-06 13:13:47.297312] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:00.298 13:13:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 76202 00:19:00.298 [2024-12-06 13:13:47.297551] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:00.298 [2024-12-06 13:13:47.297632] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:00.298 [2024-12-06 13:13:47.297685] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:00.556 [2024-12-06 13:13:47.556973] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:01.931 13:13:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 
00:19:01.931 00:19:01.931 real 0m27.408s 00:19:01.931 user 0m33.766s 00:19:01.931 sys 0m4.210s 00:19:01.931 13:13:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:01.931 13:13:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.931 ************************************ 00:19:01.931 END TEST raid_rebuild_test_sb 00:19:01.931 ************************************ 00:19:01.931 13:13:48 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:19:01.931 13:13:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:01.931 13:13:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:01.931 13:13:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:01.931 ************************************ 00:19:01.931 START TEST raid_rebuild_test_io 00:19:01.931 ************************************ 00:19:01.931 13:13:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:19:01.931 13:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:01.931 13:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:01.931 13:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:19:01.931 13:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:19:01.931 13:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:01.931 13:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:01.931 13:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:01.931 13:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:01.931 13:13:48 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:01.931 13:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:01.931 13:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:01.931 13:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:01.931 13:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:01.931 13:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:01.931 13:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:01.931 13:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:01.931 13:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:01.931 13:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:01.931 13:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:01.931 13:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:01.931 13:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:01.931 13:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:01.931 13:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:19:01.931 13:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76973 00:19:01.931 13:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:01.931 13:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76973 00:19:01.931 13:13:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 
76973 ']' 00:19:01.931 13:13:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.931 13:13:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:01.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.931 13:13:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.931 13:13:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:01.931 13:13:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:01.931 [2024-12-06 13:13:48.839573] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:19:01.931 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:01.931 Zero copy mechanism will not be used. 00:19:01.931 [2024-12-06 13:13:48.839792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76973 ] 00:19:02.189 [2024-12-06 13:13:49.029876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.189 [2024-12-06 13:13:49.167580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.447 [2024-12-06 13:13:49.385871] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:02.447 [2024-12-06 13:13:49.385952] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:03.015 13:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:03.015 13:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:19:03.015 13:13:49 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:03.015 13:13:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:03.015 13:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.015 13:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:03.015 BaseBdev1_malloc 00:19:03.015 13:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.015 13:13:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:03.015 13:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.015 13:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:03.015 [2024-12-06 13:13:49.884908] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:03.015 [2024-12-06 13:13:49.884998] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.015 [2024-12-06 13:13:49.885030] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:03.015 [2024-12-06 13:13:49.885050] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.015 [2024-12-06 13:13:49.888201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.015 [2024-12-06 13:13:49.888276] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:03.015 BaseBdev1 00:19:03.015 13:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.015 13:13:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:03.015 13:13:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:19:03.015 13:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.015 13:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:03.015 BaseBdev2_malloc 00:19:03.015 13:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.015 13:13:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:03.015 13:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.015 13:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:03.015 [2024-12-06 13:13:49.944274] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:03.015 [2024-12-06 13:13:49.944374] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.015 [2024-12-06 13:13:49.944405] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:03.015 [2024-12-06 13:13:49.944440] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.015 [2024-12-06 13:13:49.947622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.015 [2024-12-06 13:13:49.947680] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:03.015 BaseBdev2 00:19:03.015 13:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.015 13:13:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:03.015 13:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.015 13:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:03.015 spare_malloc 00:19:03.015 13:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:19:03.015 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:03.015 13:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.015 13:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:03.273 spare_delay 00:19:03.273 13:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.273 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:03.273 13:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.273 13:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:03.274 [2024-12-06 13:13:50.043625] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:03.274 [2024-12-06 13:13:50.043736] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.274 [2024-12-06 13:13:50.043767] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:03.274 [2024-12-06 13:13:50.043785] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.274 [2024-12-06 13:13:50.046970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.274 [2024-12-06 13:13:50.047019] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:03.274 spare 00:19:03.274 13:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.274 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:03.274 13:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.274 
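The xtrace above shows the bdev stacks this test assembles before creating the raid1 volume: each base bdev is a malloc bdev wrapped in a passthru bdev, and the rebuild target ("spare") gets an extra delay layer. A standalone recap of that sequence — `rpc_cmd` is stubbed here to just record the call; in the suite it forwards to `scripts/rpc.py` against the bdevperf socket:

```shell
#!/usr/bin/env bash
# Stand-in for the suite's rpc_cmd, which normally drives scripts/rpc.py
# on /var/tmp/spdk.sock; here it records and echoes each call instead.
cmds=()
rpc_cmd() { cmds+=("$*"); echo "rpc_cmd $*"; }

# Two base bdevs: a 32 MiB, 512-byte-block malloc bdev wrapped in a
# passthru bdev (bdev_raid.sh@601-603 in the trace above).
for bdev in BaseBdev1 BaseBdev2; do
  rpc_cmd bdev_malloc_create 32 512 -b "${bdev}_malloc"
  rpc_cmd bdev_passthru_create -b "${bdev}_malloc" -p "$bdev"
done

# The rebuild target inserts a delay bdev between malloc and passthru
# (bdev_raid.sh@607-609) so rebuild I/O stays observable in the logs.
rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
rpc_cmd bdev_passthru_create -b spare_delay -p spare
```

With the stacks in place, `bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1` (bdev_raid.sh@612) claims both passthru bdevs and brings the array online.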
13:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:03.274 [2024-12-06 13:13:50.051750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:03.274 [2024-12-06 13:13:50.054452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:03.274 [2024-12-06 13:13:50.054626] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:03.274 [2024-12-06 13:13:50.054647] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:03.274 [2024-12-06 13:13:50.055029] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:03.274 [2024-12-06 13:13:50.055287] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:03.274 [2024-12-06 13:13:50.055319] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:03.274 [2024-12-06 13:13:50.055537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:03.274 13:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.274 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:03.274 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:03.274 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:03.274 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:03.274 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:03.274 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:03.274 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:19:03.274 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.274 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.274 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.274 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.274 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.274 13:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.274 13:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:03.274 13:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.274 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.274 "name": "raid_bdev1", 00:19:03.274 "uuid": "5fa74129-d990-4a6e-af39-bc958f9687e8", 00:19:03.274 "strip_size_kb": 0, 00:19:03.274 "state": "online", 00:19:03.274 "raid_level": "raid1", 00:19:03.274 "superblock": false, 00:19:03.274 "num_base_bdevs": 2, 00:19:03.274 "num_base_bdevs_discovered": 2, 00:19:03.274 "num_base_bdevs_operational": 2, 00:19:03.274 "base_bdevs_list": [ 00:19:03.274 { 00:19:03.274 "name": "BaseBdev1", 00:19:03.274 "uuid": "82476588-57c3-53b0-8f7e-e6c432ffccc3", 00:19:03.274 "is_configured": true, 00:19:03.274 "data_offset": 0, 00:19:03.274 "data_size": 65536 00:19:03.274 }, 00:19:03.274 { 00:19:03.274 "name": "BaseBdev2", 00:19:03.274 "uuid": "de5d97a4-7d17-5f7b-84ba-8e8744bb38e2", 00:19:03.274 "is_configured": true, 00:19:03.274 "data_offset": 0, 00:19:03.274 "data_size": 65536 00:19:03.274 } 00:19:03.274 ] 00:19:03.274 }' 00:19:03.274 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.274 13:13:50 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:19:03.532 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:03.532 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:03.532 13:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.532 13:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:03.532 [2024-12-06 13:13:50.540364] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:03.791 13:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.791 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:19:03.791 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.791 13:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.791 13:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:03.791 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:03.791 13:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.791 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:19:03.791 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:19:03.791 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:03.791 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:03.791 13:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.791 13:13:50 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:19:03.791 [2024-12-06 13:13:50.635970] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:03.791 13:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.791 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:03.791 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:03.791 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:03.791 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:03.791 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:03.791 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:03.791 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.791 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.791 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.791 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.791 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.791 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.791 13:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.791 13:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:03.791 13:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.791 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:19:03.791 "name": "raid_bdev1", 00:19:03.791 "uuid": "5fa74129-d990-4a6e-af39-bc958f9687e8", 00:19:03.791 "strip_size_kb": 0, 00:19:03.791 "state": "online", 00:19:03.791 "raid_level": "raid1", 00:19:03.791 "superblock": false, 00:19:03.791 "num_base_bdevs": 2, 00:19:03.791 "num_base_bdevs_discovered": 1, 00:19:03.791 "num_base_bdevs_operational": 1, 00:19:03.791 "base_bdevs_list": [ 00:19:03.791 { 00:19:03.791 "name": null, 00:19:03.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.791 "is_configured": false, 00:19:03.791 "data_offset": 0, 00:19:03.791 "data_size": 65536 00:19:03.791 }, 00:19:03.791 { 00:19:03.791 "name": "BaseBdev2", 00:19:03.791 "uuid": "de5d97a4-7d17-5f7b-84ba-8e8744bb38e2", 00:19:03.791 "is_configured": true, 00:19:03.791 "data_offset": 0, 00:19:03.791 "data_size": 65536 00:19:03.791 } 00:19:03.791 ] 00:19:03.791 }' 00:19:03.791 13:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.791 13:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:03.792 [2024-12-06 13:13:50.749356] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:03.792 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:03.792 Zero copy mechanism will not be used. 00:19:03.792 Running I/O for 60 seconds... 
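After `bdev_raid_remove_base_bdev BaseBdev1`, `verify_raid_bdev_state` asserts the degraded-but-online shape printed in the JSON above: state "online", level "raid1", one of two base bdevs discovered and operational. A minimal sketch of those checks — the real helper runs `jq` over `bdev_raid_get_bdevs` output; `sed` is used here as a dependency-free stand-in, and the blob is trimmed to just the fields being checked:

```shell
#!/usr/bin/env bash
# Trimmed copy of the degraded raid_bdev1 info dumped in the trace.
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 1
}'

# Pull one scalar field out of the blob (quotes and trailing comma optional).
field() {
  sed -n "s/.*\"$1\": \"\{0,1\}\([^\",]*\)\"\{0,1\},\{0,1\}\$/\1/p" <<<"$raid_bdev_info"
}

state=$(field state)
raid_level=$(field raid_level)
discovered=$(field num_base_bdevs_discovered)
operational=$(field num_base_bdevs_operational)

# Degraded but serving I/O: one slot emptied, the mirror leg still online.
if [ "$state" = online ] && [ "$raid_level" = raid1 ] &&
   [ "$discovered" -eq "$operational" ]; then
  echo "state ok"
fi
```

The removed slot stays in `base_bdevs_list` as a null entry with the all-zero UUID, which is why `num_base_bdevs` remains 2 while only 1 is discovered.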
00:19:04.358 13:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:04.358 13:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.359 13:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:04.359 [2024-12-06 13:13:51.123931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:04.359 13:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.359 13:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:04.359 [2024-12-06 13:13:51.203287] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:04.359 [2024-12-06 13:13:51.206052] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:04.359 [2024-12-06 13:13:51.326878] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:04.617 [2024-12-06 13:13:51.478353] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:04.875 [2024-12-06 13:13:51.731293] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:05.133 208.00 IOPS, 624.00 MiB/s [2024-12-06T13:13:52.149Z] [2024-12-06 13:13:51.935508] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:05.133 [2024-12-06 13:13:51.936033] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:05.391 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:05.391 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:19:05.391 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:05.391 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:05.391 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:05.391 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.391 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.392 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.392 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:05.392 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.392 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:05.392 "name": "raid_bdev1", 00:19:05.392 "uuid": "5fa74129-d990-4a6e-af39-bc958f9687e8", 00:19:05.392 "strip_size_kb": 0, 00:19:05.392 "state": "online", 00:19:05.392 "raid_level": "raid1", 00:19:05.392 "superblock": false, 00:19:05.392 "num_base_bdevs": 2, 00:19:05.392 "num_base_bdevs_discovered": 2, 00:19:05.392 "num_base_bdevs_operational": 2, 00:19:05.392 "process": { 00:19:05.392 "type": "rebuild", 00:19:05.392 "target": "spare", 00:19:05.392 "progress": { 00:19:05.392 "blocks": 12288, 00:19:05.392 "percent": 18 00:19:05.392 } 00:19:05.392 }, 00:19:05.392 "base_bdevs_list": [ 00:19:05.392 { 00:19:05.392 "name": "spare", 00:19:05.392 "uuid": "a5b2f7e5-849e-51c1-a82d-bacc357d4537", 00:19:05.392 "is_configured": true, 00:19:05.392 "data_offset": 0, 00:19:05.392 "data_size": 65536 00:19:05.392 }, 00:19:05.392 { 00:19:05.392 "name": "BaseBdev2", 00:19:05.392 "uuid": "de5d97a4-7d17-5f7b-84ba-8e8744bb38e2", 00:19:05.392 "is_configured": true, 00:19:05.392 "data_offset": 0, 00:19:05.392 
"data_size": 65536 00:19:05.392 } 00:19:05.392 ] 00:19:05.392 }' 00:19:05.392 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:05.392 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:05.392 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:05.392 [2024-12-06 13:13:52.305875] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:05.392 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:05.392 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:05.392 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.392 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:05.392 [2024-12-06 13:13:52.342156] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:05.650 [2024-12-06 13:13:52.432865] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:05.650 [2024-12-06 13:13:52.433391] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:05.650 [2024-12-06 13:13:52.543484] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:05.650 [2024-12-06 13:13:52.555631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.650 [2024-12-06 13:13:52.555755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:05.650 [2024-12-06 13:13:52.555780] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:05.650 [2024-12-06 
13:13:52.590660] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:19:05.650 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.650 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:05.650 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:05.651 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:05.651 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:05.651 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:05.651 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:05.651 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.651 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.651 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.651 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.651 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.651 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.651 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.651 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:05.651 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.651 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.651 "name": 
"raid_bdev1", 00:19:05.651 "uuid": "5fa74129-d990-4a6e-af39-bc958f9687e8", 00:19:05.651 "strip_size_kb": 0, 00:19:05.651 "state": "online", 00:19:05.651 "raid_level": "raid1", 00:19:05.651 "superblock": false, 00:19:05.651 "num_base_bdevs": 2, 00:19:05.651 "num_base_bdevs_discovered": 1, 00:19:05.651 "num_base_bdevs_operational": 1, 00:19:05.651 "base_bdevs_list": [ 00:19:05.651 { 00:19:05.651 "name": null, 00:19:05.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.651 "is_configured": false, 00:19:05.651 "data_offset": 0, 00:19:05.651 "data_size": 65536 00:19:05.651 }, 00:19:05.651 { 00:19:05.651 "name": "BaseBdev2", 00:19:05.651 "uuid": "de5d97a4-7d17-5f7b-84ba-8e8744bb38e2", 00:19:05.651 "is_configured": true, 00:19:05.651 "data_offset": 0, 00:19:05.651 "data_size": 65536 00:19:05.651 } 00:19:05.651 ] 00:19:05.651 }' 00:19:05.651 13:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.651 13:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:06.180 144.50 IOPS, 433.50 MiB/s [2024-12-06T13:13:53.196Z] 13:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:06.180 13:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:06.180 13:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:06.180 13:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:06.180 13:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:06.180 13:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.180 13:13:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.180 13:13:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:06.180 13:13:53 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.180 13:13:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.180 13:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:06.180 "name": "raid_bdev1", 00:19:06.180 "uuid": "5fa74129-d990-4a6e-af39-bc958f9687e8", 00:19:06.180 "strip_size_kb": 0, 00:19:06.180 "state": "online", 00:19:06.180 "raid_level": "raid1", 00:19:06.180 "superblock": false, 00:19:06.180 "num_base_bdevs": 2, 00:19:06.180 "num_base_bdevs_discovered": 1, 00:19:06.180 "num_base_bdevs_operational": 1, 00:19:06.180 "base_bdevs_list": [ 00:19:06.180 { 00:19:06.180 "name": null, 00:19:06.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.180 "is_configured": false, 00:19:06.180 "data_offset": 0, 00:19:06.180 "data_size": 65536 00:19:06.180 }, 00:19:06.180 { 00:19:06.180 "name": "BaseBdev2", 00:19:06.180 "uuid": "de5d97a4-7d17-5f7b-84ba-8e8744bb38e2", 00:19:06.180 "is_configured": true, 00:19:06.180 "data_offset": 0, 00:19:06.180 "data_size": 65536 00:19:06.180 } 00:19:06.180 ] 00:19:06.180 }' 00:19:06.180 13:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:06.439 13:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:06.439 13:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:06.439 13:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:06.439 13:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:06.439 13:13:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.439 13:13:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:06.439 [2024-12-06 13:13:53.280737] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:06.439 13:13:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.439 13:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:06.439 [2024-12-06 13:13:53.345435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:06.439 [2024-12-06 13:13:53.348398] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:06.698 [2024-12-06 13:13:53.457504] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:06.698 [2024-12-06 13:13:53.458483] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:06.698 [2024-12-06 13:13:53.586555] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:06.698 [2024-12-06 13:13:53.587189] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:06.957 147.67 IOPS, 443.00 MiB/s [2024-12-06T13:13:53.973Z] [2024-12-06 13:13:53.969922] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:07.216 [2024-12-06 13:13:54.200235] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:07.475 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:07.475 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:07.475 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:07.475 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:19:07.475 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:07.475 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.475 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.475 13:13:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.475 13:13:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:07.475 13:13:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.475 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:07.475 "name": "raid_bdev1", 00:19:07.475 "uuid": "5fa74129-d990-4a6e-af39-bc958f9687e8", 00:19:07.475 "strip_size_kb": 0, 00:19:07.475 "state": "online", 00:19:07.475 "raid_level": "raid1", 00:19:07.475 "superblock": false, 00:19:07.475 "num_base_bdevs": 2, 00:19:07.475 "num_base_bdevs_discovered": 2, 00:19:07.475 "num_base_bdevs_operational": 2, 00:19:07.475 "process": { 00:19:07.475 "type": "rebuild", 00:19:07.475 "target": "spare", 00:19:07.475 "progress": { 00:19:07.475 "blocks": 10240, 00:19:07.475 "percent": 15 00:19:07.475 } 00:19:07.475 }, 00:19:07.475 "base_bdevs_list": [ 00:19:07.475 { 00:19:07.475 "name": "spare", 00:19:07.475 "uuid": "a5b2f7e5-849e-51c1-a82d-bacc357d4537", 00:19:07.475 "is_configured": true, 00:19:07.475 "data_offset": 0, 00:19:07.475 "data_size": 65536 00:19:07.475 }, 00:19:07.475 { 00:19:07.475 "name": "BaseBdev2", 00:19:07.475 "uuid": "de5d97a4-7d17-5f7b-84ba-8e8744bb38e2", 00:19:07.475 "is_configured": true, 00:19:07.475 "data_offset": 0, 00:19:07.475 "data_size": 65536 00:19:07.475 } 00:19:07.475 ] 00:19:07.475 }' 00:19:07.475 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:07.475 13:13:54 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:07.475 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:07.735 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:07.735 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:19:07.735 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:07.735 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:07.735 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:07.735 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=445 00:19:07.735 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:07.735 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:07.735 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:07.735 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:07.735 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:07.735 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:07.735 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.735 13:13:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.735 13:13:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:07.735 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.735 13:13:54 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.735 [2024-12-06 13:13:54.541804] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:07.735 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:07.735 "name": "raid_bdev1", 00:19:07.735 "uuid": "5fa74129-d990-4a6e-af39-bc958f9687e8", 00:19:07.735 "strip_size_kb": 0, 00:19:07.735 "state": "online", 00:19:07.735 "raid_level": "raid1", 00:19:07.735 "superblock": false, 00:19:07.735 "num_base_bdevs": 2, 00:19:07.735 "num_base_bdevs_discovered": 2, 00:19:07.735 "num_base_bdevs_operational": 2, 00:19:07.735 "process": { 00:19:07.735 "type": "rebuild", 00:19:07.735 "target": "spare", 00:19:07.735 "progress": { 00:19:07.735 "blocks": 12288, 00:19:07.735 "percent": 18 00:19:07.735 } 00:19:07.735 }, 00:19:07.735 "base_bdevs_list": [ 00:19:07.735 { 00:19:07.735 "name": "spare", 00:19:07.735 "uuid": "a5b2f7e5-849e-51c1-a82d-bacc357d4537", 00:19:07.735 "is_configured": true, 00:19:07.735 "data_offset": 0, 00:19:07.735 "data_size": 65536 00:19:07.735 }, 00:19:07.735 { 00:19:07.735 "name": "BaseBdev2", 00:19:07.735 "uuid": "de5d97a4-7d17-5f7b-84ba-8e8744bb38e2", 00:19:07.735 "is_configured": true, 00:19:07.735 "data_offset": 0, 00:19:07.735 "data_size": 65536 00:19:07.735 } 00:19:07.735 ] 00:19:07.735 }' 00:19:07.735 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:07.735 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:07.735 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:07.735 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:07.735 13:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:07.995 135.50 IOPS, 406.50 
MiB/s [2024-12-06T13:13:55.011Z] [2024-12-06 13:13:54.875723] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:19:07.995 [2024-12-06 13:13:54.876663] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:19:08.255 [2024-12-06 13:13:55.088244] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:19:08.537 [2024-12-06 13:13:55.411583] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:19:08.537 [2024-12-06 13:13:55.412564] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:19:08.817 13:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:08.817 13:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:08.817 13:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:08.817 13:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:08.817 13:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:08.817 13:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:08.817 13:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.817 13:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.817 13:13:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.817 13:13:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:08.817 13:13:55 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.817 13:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:08.817 "name": "raid_bdev1", 00:19:08.817 "uuid": "5fa74129-d990-4a6e-af39-bc958f9687e8", 00:19:08.817 "strip_size_kb": 0, 00:19:08.817 "state": "online", 00:19:08.817 "raid_level": "raid1", 00:19:08.817 "superblock": false, 00:19:08.817 "num_base_bdevs": 2, 00:19:08.817 "num_base_bdevs_discovered": 2, 00:19:08.817 "num_base_bdevs_operational": 2, 00:19:08.817 "process": { 00:19:08.817 "type": "rebuild", 00:19:08.817 "target": "spare", 00:19:08.817 "progress": { 00:19:08.817 "blocks": 26624, 00:19:08.817 "percent": 40 00:19:08.817 } 00:19:08.817 }, 00:19:08.817 "base_bdevs_list": [ 00:19:08.817 { 00:19:08.817 "name": "spare", 00:19:08.817 "uuid": "a5b2f7e5-849e-51c1-a82d-bacc357d4537", 00:19:08.817 "is_configured": true, 00:19:08.817 "data_offset": 0, 00:19:08.817 "data_size": 65536 00:19:08.817 }, 00:19:08.817 { 00:19:08.817 "name": "BaseBdev2", 00:19:08.817 "uuid": "de5d97a4-7d17-5f7b-84ba-8e8744bb38e2", 00:19:08.817 "is_configured": true, 00:19:08.817 "data_offset": 0, 00:19:08.817 "data_size": 65536 00:19:08.817 } 00:19:08.817 ] 00:19:08.817 }' 00:19:08.817 13:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:08.817 13:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:08.817 13:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:08.817 117.20 IOPS, 351.60 MiB/s [2024-12-06T13:13:55.833Z] 13:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:08.818 13:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:09.076 [2024-12-06 13:13:55.897906] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 
offset_end: 36864 00:19:09.076 [2024-12-06 13:13:55.898915] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:19:09.335 [2024-12-06 13:13:56.127814] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:19:09.903 [2024-12-06 13:13:56.735743] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:19:09.903 107.33 IOPS, 322.00 MiB/s [2024-12-06T13:13:56.919Z] 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:09.903 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:09.903 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:09.903 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:09.903 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:09.903 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:09.903 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.903 13:13:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.903 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.903 13:13:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:09.903 [2024-12-06 13:13:56.846815] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:19:09.903 [2024-12-06 13:13:56.847233] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 
offset_end: 49152 00:19:09.903 13:13:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.903 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:09.903 "name": "raid_bdev1", 00:19:09.903 "uuid": "5fa74129-d990-4a6e-af39-bc958f9687e8", 00:19:09.903 "strip_size_kb": 0, 00:19:09.903 "state": "online", 00:19:09.903 "raid_level": "raid1", 00:19:09.903 "superblock": false, 00:19:09.904 "num_base_bdevs": 2, 00:19:09.904 "num_base_bdevs_discovered": 2, 00:19:09.904 "num_base_bdevs_operational": 2, 00:19:09.904 "process": { 00:19:09.904 "type": "rebuild", 00:19:09.904 "target": "spare", 00:19:09.904 "progress": { 00:19:09.904 "blocks": 45056, 00:19:09.904 "percent": 68 00:19:09.904 } 00:19:09.904 }, 00:19:09.904 "base_bdevs_list": [ 00:19:09.904 { 00:19:09.904 "name": "spare", 00:19:09.904 "uuid": "a5b2f7e5-849e-51c1-a82d-bacc357d4537", 00:19:09.904 "is_configured": true, 00:19:09.904 "data_offset": 0, 00:19:09.904 "data_size": 65536 00:19:09.904 }, 00:19:09.904 { 00:19:09.904 "name": "BaseBdev2", 00:19:09.904 "uuid": "de5d97a4-7d17-5f7b-84ba-8e8744bb38e2", 00:19:09.904 "is_configured": true, 00:19:09.904 "data_offset": 0, 00:19:09.904 "data_size": 65536 00:19:09.904 } 00:19:09.904 ] 00:19:09.904 }' 00:19:09.904 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:10.163 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:10.163 13:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:10.163 13:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:10.163 13:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:10.729 [2024-12-06 13:13:57.505866] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 
61440 00:19:10.729 [2024-12-06 13:13:57.506568] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:19:10.729 [2024-12-06 13:13:57.711281] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:19:11.248 97.14 IOPS, 291.43 MiB/s [2024-12-06T13:13:58.264Z] 13:13:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:11.248 13:13:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:11.248 13:13:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:11.248 13:13:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:11.248 13:13:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:11.248 13:13:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:11.248 13:13:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.248 13:13:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.248 13:13:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.248 13:13:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:11.248 13:13:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.248 [2024-12-06 13:13:58.079189] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:11.248 13:13:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:11.248 "name": "raid_bdev1", 00:19:11.248 "uuid": "5fa74129-d990-4a6e-af39-bc958f9687e8", 00:19:11.248 "strip_size_kb": 0, 00:19:11.248 "state": "online", 
00:19:11.248 "raid_level": "raid1", 00:19:11.248 "superblock": false, 00:19:11.248 "num_base_bdevs": 2, 00:19:11.248 "num_base_bdevs_discovered": 2, 00:19:11.248 "num_base_bdevs_operational": 2, 00:19:11.248 "process": { 00:19:11.248 "type": "rebuild", 00:19:11.248 "target": "spare", 00:19:11.248 "progress": { 00:19:11.248 "blocks": 63488, 00:19:11.248 "percent": 96 00:19:11.248 } 00:19:11.248 }, 00:19:11.248 "base_bdevs_list": [ 00:19:11.248 { 00:19:11.248 "name": "spare", 00:19:11.248 "uuid": "a5b2f7e5-849e-51c1-a82d-bacc357d4537", 00:19:11.248 "is_configured": true, 00:19:11.248 "data_offset": 0, 00:19:11.248 "data_size": 65536 00:19:11.248 }, 00:19:11.248 { 00:19:11.248 "name": "BaseBdev2", 00:19:11.248 "uuid": "de5d97a4-7d17-5f7b-84ba-8e8744bb38e2", 00:19:11.248 "is_configured": true, 00:19:11.248 "data_offset": 0, 00:19:11.248 "data_size": 65536 00:19:11.248 } 00:19:11.248 ] 00:19:11.248 }' 00:19:11.248 13:13:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:11.248 13:13:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:11.248 13:13:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:11.248 [2024-12-06 13:13:58.179195] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:11.248 13:13:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:11.248 13:13:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:11.248 [2024-12-06 13:13:58.191534] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:12.384 89.62 IOPS, 268.88 MiB/s [2024-12-06T13:13:59.400Z] 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:12.384 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:19:12.384 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:12.384 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:12.384 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:12.384 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:12.384 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.384 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.384 13:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.384 13:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:12.384 13:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.384 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:12.384 "name": "raid_bdev1", 00:19:12.384 "uuid": "5fa74129-d990-4a6e-af39-bc958f9687e8", 00:19:12.384 "strip_size_kb": 0, 00:19:12.384 "state": "online", 00:19:12.384 "raid_level": "raid1", 00:19:12.384 "superblock": false, 00:19:12.384 "num_base_bdevs": 2, 00:19:12.384 "num_base_bdevs_discovered": 2, 00:19:12.384 "num_base_bdevs_operational": 2, 00:19:12.384 "base_bdevs_list": [ 00:19:12.384 { 00:19:12.384 "name": "spare", 00:19:12.384 "uuid": "a5b2f7e5-849e-51c1-a82d-bacc357d4537", 00:19:12.384 "is_configured": true, 00:19:12.384 "data_offset": 0, 00:19:12.384 "data_size": 65536 00:19:12.384 }, 00:19:12.384 { 00:19:12.384 "name": "BaseBdev2", 00:19:12.384 "uuid": "de5d97a4-7d17-5f7b-84ba-8e8744bb38e2", 00:19:12.384 "is_configured": true, 00:19:12.384 "data_offset": 0, 00:19:12.384 "data_size": 65536 00:19:12.384 } 00:19:12.384 ] 00:19:12.384 }' 00:19:12.384 13:13:59 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:12.384 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:12.384 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:12.384 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:12.384 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:19:12.384 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:12.384 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:12.384 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:12.384 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:12.384 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:12.384 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.384 13:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.384 13:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:12.384 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.384 13:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.643 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:12.643 "name": "raid_bdev1", 00:19:12.643 "uuid": "5fa74129-d990-4a6e-af39-bc958f9687e8", 00:19:12.643 "strip_size_kb": 0, 00:19:12.643 "state": "online", 00:19:12.643 "raid_level": "raid1", 00:19:12.643 "superblock": false, 00:19:12.643 "num_base_bdevs": 2, 
00:19:12.643 "num_base_bdevs_discovered": 2, 00:19:12.643 "num_base_bdevs_operational": 2, 00:19:12.643 "base_bdevs_list": [ 00:19:12.643 { 00:19:12.643 "name": "spare", 00:19:12.643 "uuid": "a5b2f7e5-849e-51c1-a82d-bacc357d4537", 00:19:12.643 "is_configured": true, 00:19:12.643 "data_offset": 0, 00:19:12.643 "data_size": 65536 00:19:12.643 }, 00:19:12.643 { 00:19:12.643 "name": "BaseBdev2", 00:19:12.643 "uuid": "de5d97a4-7d17-5f7b-84ba-8e8744bb38e2", 00:19:12.643 "is_configured": true, 00:19:12.643 "data_offset": 0, 00:19:12.643 "data_size": 65536 00:19:12.643 } 00:19:12.643 ] 00:19:12.643 }' 00:19:12.643 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:12.643 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:12.643 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:12.644 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:12.644 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:12.644 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:12.644 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:12.644 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:12.644 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:12.644 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:12.644 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.644 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.644 13:13:59 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.644 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.644 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.644 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.644 13:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.644 13:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:12.644 13:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.644 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.644 "name": "raid_bdev1", 00:19:12.644 "uuid": "5fa74129-d990-4a6e-af39-bc958f9687e8", 00:19:12.644 "strip_size_kb": 0, 00:19:12.644 "state": "online", 00:19:12.644 "raid_level": "raid1", 00:19:12.644 "superblock": false, 00:19:12.644 "num_base_bdevs": 2, 00:19:12.644 "num_base_bdevs_discovered": 2, 00:19:12.644 "num_base_bdevs_operational": 2, 00:19:12.644 "base_bdevs_list": [ 00:19:12.644 { 00:19:12.644 "name": "spare", 00:19:12.644 "uuid": "a5b2f7e5-849e-51c1-a82d-bacc357d4537", 00:19:12.644 "is_configured": true, 00:19:12.644 "data_offset": 0, 00:19:12.644 "data_size": 65536 00:19:12.644 }, 00:19:12.644 { 00:19:12.644 "name": "BaseBdev2", 00:19:12.644 "uuid": "de5d97a4-7d17-5f7b-84ba-8e8744bb38e2", 00:19:12.644 "is_configured": true, 00:19:12.644 "data_offset": 0, 00:19:12.644 "data_size": 65536 00:19:12.644 } 00:19:12.644 ] 00:19:12.644 }' 00:19:12.644 13:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.644 13:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:13.161 82.89 IOPS, 248.67 MiB/s [2024-12-06T13:14:00.177Z] 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # 
rpc_cmd bdev_raid_delete raid_bdev1 00:19:13.161 13:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.161 13:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:13.161 [2024-12-06 13:14:00.018163] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:13.161 [2024-12-06 13:14:00.018212] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:13.161 00:19:13.161 Latency(us) 00:19:13.161 [2024-12-06T13:14:00.177Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.161 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:19:13.161 raid_bdev1 : 9.33 80.68 242.03 0.00 0.00 17572.25 296.03 119156.36 00:19:13.161 [2024-12-06T13:14:00.177Z] =================================================================================================================== 00:19:13.161 [2024-12-06T13:14:00.177Z] Total : 80.68 242.03 0.00 0.00 17572.25 296.03 119156.36 00:19:13.161 [2024-12-06 13:14:00.106923] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:13.161 [2024-12-06 13:14:00.107044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:13.161 [2024-12-06 13:14:00.107170] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:13.161 [2024-12-06 13:14:00.107193] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:13.161 { 00:19:13.161 "results": [ 00:19:13.161 { 00:19:13.161 "job": "raid_bdev1", 00:19:13.161 "core_mask": "0x1", 00:19:13.161 "workload": "randrw", 00:19:13.161 "percentage": 50, 00:19:13.161 "status": "finished", 00:19:13.161 "queue_depth": 2, 00:19:13.161 "io_size": 3145728, 00:19:13.161 "runtime": 9.333712, 00:19:13.161 "iops": 80.6752983164683, 00:19:13.161 "mibps": 
242.02589494940491, 00:19:13.161 "io_failed": 0, 00:19:13.161 "io_timeout": 0, 00:19:13.161 "avg_latency_us": 17572.246731860436, 00:19:13.161 "min_latency_us": 296.0290909090909, 00:19:13.161 "max_latency_us": 119156.36363636363 00:19:13.161 } 00:19:13.161 ], 00:19:13.161 "core_count": 1 00:19:13.161 } 00:19:13.161 13:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.161 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.162 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:19:13.162 13:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.162 13:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:13.162 13:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.162 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:13.162 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:13.162 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:19:13.162 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:19:13.162 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:13.162 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:19:13.162 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:13.162 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:13.162 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:13.162 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:19:13.162 13:14:00 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:13.162 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:13.162 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:19:13.729 /dev/nbd0 00:19:13.729 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:13.729 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:13.729 13:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:13.729 13:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:19:13.729 13:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:13.729 13:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:13.729 13:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:13.729 13:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:19:13.729 13:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:13.729 13:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:13.729 13:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:13.729 1+0 records in 00:19:13.729 1+0 records out 00:19:13.729 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000528616 s, 7.7 MB/s 00:19:13.729 13:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:13.729 13:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:19:13.729 
13:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:13.729 13:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:13.729 13:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:19:13.729 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:13.729 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:13.729 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:19:13.729 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:19:13.729 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:19:13.729 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:13.729 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:19:13.729 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:13.729 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:13.729 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:13.729 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:19:13.729 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:13.729 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:13.729 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:19:13.988 /dev/nbd1 00:19:13.989 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd1 00:19:13.989 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:13.989 13:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:13.989 13:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:19:13.989 13:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:13.989 13:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:13.989 13:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:13.989 13:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:19:13.989 13:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:13.989 13:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:13.989 13:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:13.989 1+0 records in 00:19:13.989 1+0 records out 00:19:13.989 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504603 s, 8.1 MB/s 00:19:13.989 13:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:13.989 13:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:19:13.989 13:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:13.989 13:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:13.989 13:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:19:13.989 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:13.989 13:14:00 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:13.989 13:14:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:14.248 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:19:14.248 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:14.248 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:14.248 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:14.248 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:19:14.248 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:14.248 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:14.507 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:14.507 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:14.507 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:14.507 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:14.507 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:14.507 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:14.507 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:19:14.507 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:14.507 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:14.507 13:14:01 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:14.507 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:14.507 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:14.507 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:19:14.507 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:14.507 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:14.765 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:14.765 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:14.765 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:14.765 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:14.765 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:14.765 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:14.765 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:19:14.765 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:14.765 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:19:14.765 13:14:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76973 00:19:14.765 13:14:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76973 ']' 00:19:14.765 13:14:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76973 00:19:14.765 13:14:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:19:14.765 13:14:01 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:14.765 13:14:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76973 00:19:14.765 killing process with pid 76973 00:19:14.765 Received shutdown signal, test time was about 10.994398 seconds 00:19:14.765 00:19:14.765 Latency(us) 00:19:14.765 [2024-12-06T13:14:01.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.765 [2024-12-06T13:14:01.781Z] =================================================================================================================== 00:19:14.765 [2024-12-06T13:14:01.781Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:14.765 13:14:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:14.765 13:14:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:14.765 13:14:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76973' 00:19:14.765 13:14:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76973 00:19:14.765 [2024-12-06 13:14:01.746780] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:14.765 13:14:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76973 00:19:15.024 [2024-12-06 13:14:01.961744] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:16.399 ************************************ 00:19:16.399 END TEST raid_rebuild_test_io 00:19:16.399 ************************************ 00:19:16.399 13:14:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:19:16.399 00:19:16.399 real 0m14.416s 00:19:16.399 user 0m18.477s 00:19:16.399 sys 0m1.610s 00:19:16.399 13:14:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:16.399 13:14:03 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:19:16.399 13:14:03 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:19:16.399 13:14:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:16.399 13:14:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:16.399 13:14:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:16.399 ************************************ 00:19:16.399 START TEST raid_rebuild_test_sb_io 00:19:16.399 ************************************ 00:19:16.399 13:14:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:19:16.399 13:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:16.399 13:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:16.399 13:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:16.399 13:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:19:16.399 13:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:16.399 13:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:16.399 13:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:16.400 13:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:16.400 13:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:16.400 13:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:16.400 13:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:16.400 13:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:16.400 13:14:03 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:16.400 13:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:16.400 13:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:16.400 13:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:16.400 13:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:16.400 13:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:16.400 13:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:16.400 13:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:16.400 13:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:16.400 13:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:16.400 13:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:16.400 13:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:16.400 13:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77378 00:19:16.400 13:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77378 00:19:16.400 13:14:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77378 ']' 00:19:16.400 13:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:16.400 13:14:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.400 13:14:03 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:19:16.400 13:14:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.400 13:14:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:16.400 13:14:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:16.400 [2024-12-06 13:14:03.296377] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:19:16.400 [2024-12-06 13:14:03.296863] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77378 ] 00:19:16.400 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:16.400 Zero copy mechanism will not be used. 
00:19:16.658 [2024-12-06 13:14:03.474303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.658 [2024-12-06 13:14:03.610721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.966 [2024-12-06 13:14:03.824227] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:16.966 [2024-12-06 13:14:03.824660] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:17.232 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:17.232 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:19:17.232 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:17.232 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:17.232 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.232 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:17.502 BaseBdev1_malloc 00:19:17.502 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.502 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:17.502 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.502 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:17.502 [2024-12-06 13:14:04.291058] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:17.502 [2024-12-06 13:14:04.291181] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:17.502 [2024-12-06 13:14:04.291232] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:19:17.502 [2024-12-06 13:14:04.291262] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:17.502 [2024-12-06 13:14:04.294458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:17.502 [2024-12-06 13:14:04.294716] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:17.502 BaseBdev1 00:19:17.502 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.502 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:17.502 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:17.502 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.502 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:17.503 BaseBdev2_malloc 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:17.503 [2024-12-06 13:14:04.347599] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:17.503 [2024-12-06 13:14:04.347837] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:17.503 [2024-12-06 13:14:04.347894] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:17.503 [2024-12-06 13:14:04.347925] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:17.503 [2024-12-06 13:14:04.351255] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:17.503 [2024-12-06 13:14:04.351493] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:17.503 BaseBdev2 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:17.503 spare_malloc 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:17.503 spare_delay 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:17.503 [2024-12-06 13:14:04.425151] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:17.503 [2024-12-06 13:14:04.425268] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:17.503 [2024-12-06 13:14:04.425324] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:17.503 [2024-12-06 13:14:04.425353] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:17.503 [2024-12-06 13:14:04.428997] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:17.503 [2024-12-06 13:14:04.429056] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:17.503 spare 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:17.503 [2024-12-06 13:14:04.437432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:17.503 [2024-12-06 13:14:04.440313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:17.503 [2024-12-06 13:14:04.440784] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:17.503 [2024-12-06 13:14:04.440818] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:17.503 [2024-12-06 13:14:04.441236] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:17.503 [2024-12-06 13:14:04.441471] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:17.503 [2024-12-06 13:14:04.441503] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:17.503 [2024-12-06 13:14:04.441851] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:17.503 13:14:04 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.503 "name": "raid_bdev1", 00:19:17.503 "uuid": "ab908e36-f2b0-4a4d-9dc8-6c21f82c3735", 00:19:17.503 
"strip_size_kb": 0, 00:19:17.503 "state": "online", 00:19:17.503 "raid_level": "raid1", 00:19:17.503 "superblock": true, 00:19:17.503 "num_base_bdevs": 2, 00:19:17.503 "num_base_bdevs_discovered": 2, 00:19:17.503 "num_base_bdevs_operational": 2, 00:19:17.503 "base_bdevs_list": [ 00:19:17.503 { 00:19:17.503 "name": "BaseBdev1", 00:19:17.503 "uuid": "d1ad8385-29bd-53ca-babf-4575f3eca27f", 00:19:17.503 "is_configured": true, 00:19:17.503 "data_offset": 2048, 00:19:17.503 "data_size": 63488 00:19:17.503 }, 00:19:17.503 { 00:19:17.503 "name": "BaseBdev2", 00:19:17.503 "uuid": "25f64118-d596-52c4-a3b4-4edc6aba4ec0", 00:19:17.503 "is_configured": true, 00:19:17.503 "data_offset": 2048, 00:19:17.503 "data_size": 63488 00:19:17.503 } 00:19:17.503 ] 00:19:17.503 }' 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.503 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:18.069 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:18.069 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:18.069 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.069 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:18.069 [2024-12-06 13:14:04.954345] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:18.069 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.069 13:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:19:18.070 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.070 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.070 13:14:05 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:18.070 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:18.070 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.070 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:19:18.070 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:19:18.070 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:18.070 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:18.070 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.070 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:18.070 [2024-12-06 13:14:05.062036] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:18.070 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.070 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:18.070 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:18.070 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:18.070 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:18.070 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:18.070 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:18.070 13:14:05 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.070 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.070 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.070 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.070 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.070 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.070 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:18.070 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.328 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.328 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.328 "name": "raid_bdev1", 00:19:18.328 "uuid": "ab908e36-f2b0-4a4d-9dc8-6c21f82c3735", 00:19:18.328 "strip_size_kb": 0, 00:19:18.328 "state": "online", 00:19:18.328 "raid_level": "raid1", 00:19:18.328 "superblock": true, 00:19:18.328 "num_base_bdevs": 2, 00:19:18.328 "num_base_bdevs_discovered": 1, 00:19:18.328 "num_base_bdevs_operational": 1, 00:19:18.328 "base_bdevs_list": [ 00:19:18.328 { 00:19:18.328 "name": null, 00:19:18.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.328 "is_configured": false, 00:19:18.328 "data_offset": 0, 00:19:18.328 "data_size": 63488 00:19:18.328 }, 00:19:18.328 { 00:19:18.328 "name": "BaseBdev2", 00:19:18.328 "uuid": "25f64118-d596-52c4-a3b4-4edc6aba4ec0", 00:19:18.328 "is_configured": true, 00:19:18.328 "data_offset": 2048, 00:19:18.328 "data_size": 63488 00:19:18.328 } 00:19:18.328 ] 00:19:18.328 }' 00:19:18.328 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.328 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:18.328 [2024-12-06 13:14:05.227160] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:18.328 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:18.328 Zero copy mechanism will not be used. 00:19:18.328 Running I/O for 60 seconds... 00:19:18.585 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:18.585 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.585 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:18.585 [2024-12-06 13:14:05.592607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:18.844 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.844 13:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:18.844 [2024-12-06 13:14:05.655830] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:18.844 [2024-12-06 13:14:05.658783] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:18.844 [2024-12-06 13:14:05.793829] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:18.844 [2024-12-06 13:14:05.794635] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:19.103 [2024-12-06 13:14:06.014298] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:19.103 [2024-12-06 13:14:06.015039] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:19:19.362 142.00 IOPS, 426.00 MiB/s [2024-12-06T13:14:06.378Z] [2024-12-06 13:14:06.357721] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:19.620 [2024-12-06 13:14:06.570808] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:19.620 [2024-12-06 13:14:06.571675] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:19.880 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:19.880 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:19.880 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:19.880 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:19.880 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:19.880 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.880 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.880 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.880 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:19.880 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.880 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:19.880 "name": "raid_bdev1", 00:19:19.880 "uuid": "ab908e36-f2b0-4a4d-9dc8-6c21f82c3735", 00:19:19.880 "strip_size_kb": 0, 00:19:19.880 "state": "online", 00:19:19.880 "raid_level": "raid1", 
00:19:19.880 "superblock": true, 00:19:19.880 "num_base_bdevs": 2, 00:19:19.880 "num_base_bdevs_discovered": 2, 00:19:19.880 "num_base_bdevs_operational": 2, 00:19:19.880 "process": { 00:19:19.880 "type": "rebuild", 00:19:19.880 "target": "spare", 00:19:19.880 "progress": { 00:19:19.880 "blocks": 10240, 00:19:19.880 "percent": 16 00:19:19.880 } 00:19:19.880 }, 00:19:19.880 "base_bdevs_list": [ 00:19:19.880 { 00:19:19.880 "name": "spare", 00:19:19.880 "uuid": "af3690d9-c22e-5c44-8db3-b6f16f04a3b4", 00:19:19.880 "is_configured": true, 00:19:19.880 "data_offset": 2048, 00:19:19.880 "data_size": 63488 00:19:19.880 }, 00:19:19.880 { 00:19:19.880 "name": "BaseBdev2", 00:19:19.880 "uuid": "25f64118-d596-52c4-a3b4-4edc6aba4ec0", 00:19:19.880 "is_configured": true, 00:19:19.880 "data_offset": 2048, 00:19:19.880 "data_size": 63488 00:19:19.880 } 00:19:19.880 ] 00:19:19.880 }' 00:19:19.880 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:19.880 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:19.880 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:19.880 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:19.880 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:19.880 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.880 13:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:19.880 [2024-12-06 13:14:06.797248] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:20.139 [2024-12-06 13:14:06.942161] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:20.139 [2024-12-06 13:14:06.946119] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:20.139 [2024-12-06 13:14:06.946193] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:20.139 [2024-12-06 13:14:06.946215] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:20.139 [2024-12-06 13:14:07.001178] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:19:20.139 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.139 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:20.139 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:20.139 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:20.139 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:20.139 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:20.139 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:20.139 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.139 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.139 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.139 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.139 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.139 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.139 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.139 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:20.139 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.139 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.139 "name": "raid_bdev1", 00:19:20.139 "uuid": "ab908e36-f2b0-4a4d-9dc8-6c21f82c3735", 00:19:20.139 "strip_size_kb": 0, 00:19:20.139 "state": "online", 00:19:20.139 "raid_level": "raid1", 00:19:20.139 "superblock": true, 00:19:20.139 "num_base_bdevs": 2, 00:19:20.139 "num_base_bdevs_discovered": 1, 00:19:20.139 "num_base_bdevs_operational": 1, 00:19:20.139 "base_bdevs_list": [ 00:19:20.139 { 00:19:20.139 "name": null, 00:19:20.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.139 "is_configured": false, 00:19:20.139 "data_offset": 0, 00:19:20.139 "data_size": 63488 00:19:20.139 }, 00:19:20.139 { 00:19:20.139 "name": "BaseBdev2", 00:19:20.139 "uuid": "25f64118-d596-52c4-a3b4-4edc6aba4ec0", 00:19:20.140 "is_configured": true, 00:19:20.140 "data_offset": 2048, 00:19:20.140 "data_size": 63488 00:19:20.140 } 00:19:20.140 ] 00:19:20.140 }' 00:19:20.140 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.140 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:20.657 103.00 IOPS, 309.00 MiB/s [2024-12-06T13:14:07.673Z] 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:20.657 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:20.657 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:20.657 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:20.657 13:14:07 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:20.657 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.657 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.657 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.657 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:20.657 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.657 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:20.657 "name": "raid_bdev1", 00:19:20.657 "uuid": "ab908e36-f2b0-4a4d-9dc8-6c21f82c3735", 00:19:20.657 "strip_size_kb": 0, 00:19:20.657 "state": "online", 00:19:20.657 "raid_level": "raid1", 00:19:20.657 "superblock": true, 00:19:20.657 "num_base_bdevs": 2, 00:19:20.657 "num_base_bdevs_discovered": 1, 00:19:20.657 "num_base_bdevs_operational": 1, 00:19:20.657 "base_bdevs_list": [ 00:19:20.657 { 00:19:20.657 "name": null, 00:19:20.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.657 "is_configured": false, 00:19:20.657 "data_offset": 0, 00:19:20.657 "data_size": 63488 00:19:20.657 }, 00:19:20.657 { 00:19:20.657 "name": "BaseBdev2", 00:19:20.657 "uuid": "25f64118-d596-52c4-a3b4-4edc6aba4ec0", 00:19:20.657 "is_configured": true, 00:19:20.657 "data_offset": 2048, 00:19:20.657 "data_size": 63488 00:19:20.657 } 00:19:20.657 ] 00:19:20.657 }' 00:19:20.657 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:20.657 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:20.657 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:20.917 13:14:07 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:20.917 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:20.917 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.917 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:20.917 [2024-12-06 13:14:07.738793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:20.917 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.917 13:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:20.917 [2024-12-06 13:14:07.808840] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:20.917 [2024-12-06 13:14:07.811846] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:20.917 [2024-12-06 13:14:07.914218] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:20.917 [2024-12-06 13:14:07.915208] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:21.202 [2024-12-06 13:14:08.128421] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:21.202 [2024-12-06 13:14:08.129287] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:21.722 119.00 IOPS, 357.00 MiB/s [2024-12-06T13:14:08.738Z] [2024-12-06 13:14:08.490081] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:21.722 [2024-12-06 13:14:08.716418] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 
offset_begin: 6144 offset_end: 12288 00:19:21.722 [2024-12-06 13:14:08.717324] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:21.982 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:21.982 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:21.982 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:21.982 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:21.982 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:21.982 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.982 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.982 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.982 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:21.982 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.982 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:21.982 "name": "raid_bdev1", 00:19:21.982 "uuid": "ab908e36-f2b0-4a4d-9dc8-6c21f82c3735", 00:19:21.982 "strip_size_kb": 0, 00:19:21.982 "state": "online", 00:19:21.982 "raid_level": "raid1", 00:19:21.982 "superblock": true, 00:19:21.982 "num_base_bdevs": 2, 00:19:21.982 "num_base_bdevs_discovered": 2, 00:19:21.982 "num_base_bdevs_operational": 2, 00:19:21.982 "process": { 00:19:21.982 "type": "rebuild", 00:19:21.982 "target": "spare", 00:19:21.982 "progress": { 00:19:21.982 "blocks": 10240, 00:19:21.982 "percent": 16 00:19:21.982 } 
00:19:21.982 }, 00:19:21.982 "base_bdevs_list": [ 00:19:21.982 { 00:19:21.982 "name": "spare", 00:19:21.982 "uuid": "af3690d9-c22e-5c44-8db3-b6f16f04a3b4", 00:19:21.982 "is_configured": true, 00:19:21.982 "data_offset": 2048, 00:19:21.982 "data_size": 63488 00:19:21.982 }, 00:19:21.982 { 00:19:21.982 "name": "BaseBdev2", 00:19:21.982 "uuid": "25f64118-d596-52c4-a3b4-4edc6aba4ec0", 00:19:21.982 "is_configured": true, 00:19:21.982 "data_offset": 2048, 00:19:21.982 "data_size": 63488 00:19:21.982 } 00:19:21.982 ] 00:19:21.982 }' 00:19:21.982 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:21.982 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:21.982 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:21.982 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:21.982 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:21.982 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:21.982 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:21.982 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:21.982 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:21.982 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:21.982 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=459 00:19:21.982 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:21.982 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:19:21.982 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:21.982 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:21.982 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:21.982 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:21.982 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.982 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.982 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.982 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:21.982 13:14:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.241 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:22.241 "name": "raid_bdev1", 00:19:22.241 "uuid": "ab908e36-f2b0-4a4d-9dc8-6c21f82c3735", 00:19:22.241 "strip_size_kb": 0, 00:19:22.241 "state": "online", 00:19:22.241 "raid_level": "raid1", 00:19:22.241 "superblock": true, 00:19:22.241 "num_base_bdevs": 2, 00:19:22.241 "num_base_bdevs_discovered": 2, 00:19:22.241 "num_base_bdevs_operational": 2, 00:19:22.241 "process": { 00:19:22.241 "type": "rebuild", 00:19:22.241 "target": "spare", 00:19:22.241 "progress": { 00:19:22.241 "blocks": 12288, 00:19:22.241 "percent": 19 00:19:22.241 } 00:19:22.241 }, 00:19:22.241 "base_bdevs_list": [ 00:19:22.241 { 00:19:22.241 "name": "spare", 00:19:22.241 "uuid": "af3690d9-c22e-5c44-8db3-b6f16f04a3b4", 00:19:22.241 "is_configured": true, 00:19:22.241 "data_offset": 2048, 00:19:22.241 "data_size": 63488 00:19:22.241 }, 00:19:22.241 { 00:19:22.241 "name": "BaseBdev2", 00:19:22.241 "uuid": 
"25f64118-d596-52c4-a3b4-4edc6aba4ec0", 00:19:22.241 "is_configured": true, 00:19:22.241 "data_offset": 2048, 00:19:22.241 "data_size": 63488 00:19:22.241 } 00:19:22.241 ] 00:19:22.241 }' 00:19:22.241 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:22.241 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:22.241 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:22.241 [2024-12-06 13:14:09.084054] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:22.241 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:22.241 13:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:22.499 110.00 IOPS, 330.00 MiB/s [2024-12-06T13:14:09.515Z] [2024-12-06 13:14:09.318207] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:22.774 [2024-12-06 13:14:09.652161] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:19:22.774 [2024-12-06 13:14:09.653133] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:19:23.033 [2024-12-06 13:14:09.899541] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:19:23.292 13:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:23.292 13:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:23.292 13:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:23.292 
13:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:23.292 13:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:23.292 13:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:23.292 13:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.292 13:14:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.292 13:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.292 13:14:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:23.292 13:14:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.292 13:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:23.292 "name": "raid_bdev1", 00:19:23.292 "uuid": "ab908e36-f2b0-4a4d-9dc8-6c21f82c3735", 00:19:23.292 "strip_size_kb": 0, 00:19:23.292 "state": "online", 00:19:23.292 "raid_level": "raid1", 00:19:23.292 "superblock": true, 00:19:23.292 "num_base_bdevs": 2, 00:19:23.292 "num_base_bdevs_discovered": 2, 00:19:23.292 "num_base_bdevs_operational": 2, 00:19:23.292 "process": { 00:19:23.292 "type": "rebuild", 00:19:23.292 "target": "spare", 00:19:23.292 "progress": { 00:19:23.292 "blocks": 24576, 00:19:23.292 "percent": 38 00:19:23.292 } 00:19:23.292 }, 00:19:23.292 "base_bdevs_list": [ 00:19:23.292 { 00:19:23.292 "name": "spare", 00:19:23.292 "uuid": "af3690d9-c22e-5c44-8db3-b6f16f04a3b4", 00:19:23.292 "is_configured": true, 00:19:23.292 "data_offset": 2048, 00:19:23.292 "data_size": 63488 00:19:23.292 }, 00:19:23.292 { 00:19:23.292 "name": "BaseBdev2", 00:19:23.292 "uuid": "25f64118-d596-52c4-a3b4-4edc6aba4ec0", 00:19:23.292 "is_configured": true, 00:19:23.292 "data_offset": 2048, 00:19:23.292 
"data_size": 63488 00:19:23.292 } 00:19:23.292 ] 00:19:23.292 }' 00:19:23.292 13:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:23.292 13:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:23.293 13:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:23.293 [2024-12-06 13:14:10.226288] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:19:23.293 [2024-12-06 13:14:10.227393] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:19:23.293 96.00 IOPS, 288.00 MiB/s [2024-12-06T13:14:10.309Z] 13:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:23.293 13:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:23.551 [2024-12-06 13:14:10.468812] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:19:23.809 [2024-12-06 13:14:10.784251] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:19:24.068 [2024-12-06 13:14:11.005285] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:19:24.327 88.50 IOPS, 265.50 MiB/s [2024-12-06T13:14:11.343Z] 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:24.327 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:24.327 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:24.327 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- 
# local process_type=rebuild 00:19:24.327 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:24.327 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:24.327 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.327 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.327 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.327 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:24.327 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.586 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:24.586 "name": "raid_bdev1", 00:19:24.586 "uuid": "ab908e36-f2b0-4a4d-9dc8-6c21f82c3735", 00:19:24.586 "strip_size_kb": 0, 00:19:24.586 "state": "online", 00:19:24.586 "raid_level": "raid1", 00:19:24.586 "superblock": true, 00:19:24.586 "num_base_bdevs": 2, 00:19:24.586 "num_base_bdevs_discovered": 2, 00:19:24.586 "num_base_bdevs_operational": 2, 00:19:24.586 "process": { 00:19:24.586 "type": "rebuild", 00:19:24.586 "target": "spare", 00:19:24.586 "progress": { 00:19:24.586 "blocks": 36864, 00:19:24.586 "percent": 58 00:19:24.586 } 00:19:24.586 }, 00:19:24.586 "base_bdevs_list": [ 00:19:24.586 { 00:19:24.586 "name": "spare", 00:19:24.586 "uuid": "af3690d9-c22e-5c44-8db3-b6f16f04a3b4", 00:19:24.586 "is_configured": true, 00:19:24.586 "data_offset": 2048, 00:19:24.586 "data_size": 63488 00:19:24.586 }, 00:19:24.586 { 00:19:24.586 "name": "BaseBdev2", 00:19:24.586 "uuid": "25f64118-d596-52c4-a3b4-4edc6aba4ec0", 00:19:24.586 "is_configured": true, 00:19:24.586 "data_offset": 2048, 00:19:24.586 "data_size": 63488 00:19:24.586 } 00:19:24.586 ] 00:19:24.586 }' 00:19:24.586 
13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:24.586 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:24.586 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:24.586 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:24.586 13:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:24.846 [2024-12-06 13:14:11.786718] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:19:25.415 [2024-12-06 13:14:12.139642] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:19:25.674 80.14 IOPS, 240.43 MiB/s [2024-12-06T13:14:12.690Z] 13:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:25.674 13:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:25.674 13:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:25.674 13:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:25.674 13:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:25.674 13:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:25.674 13:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.674 13:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.674 13:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.674 13:14:12 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:25.674 13:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.674 13:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:25.674 "name": "raid_bdev1", 00:19:25.674 "uuid": "ab908e36-f2b0-4a4d-9dc8-6c21f82c3735", 00:19:25.674 "strip_size_kb": 0, 00:19:25.674 "state": "online", 00:19:25.674 "raid_level": "raid1", 00:19:25.674 "superblock": true, 00:19:25.674 "num_base_bdevs": 2, 00:19:25.674 "num_base_bdevs_discovered": 2, 00:19:25.674 "num_base_bdevs_operational": 2, 00:19:25.674 "process": { 00:19:25.674 "type": "rebuild", 00:19:25.674 "target": "spare", 00:19:25.674 "progress": { 00:19:25.674 "blocks": 57344, 00:19:25.674 "percent": 90 00:19:25.674 } 00:19:25.674 }, 00:19:25.674 "base_bdevs_list": [ 00:19:25.674 { 00:19:25.674 "name": "spare", 00:19:25.674 "uuid": "af3690d9-c22e-5c44-8db3-b6f16f04a3b4", 00:19:25.674 "is_configured": true, 00:19:25.674 "data_offset": 2048, 00:19:25.674 "data_size": 63488 00:19:25.674 }, 00:19:25.674 { 00:19:25.674 "name": "BaseBdev2", 00:19:25.674 "uuid": "25f64118-d596-52c4-a3b4-4edc6aba4ec0", 00:19:25.674 "is_configured": true, 00:19:25.674 "data_offset": 2048, 00:19:25.674 "data_size": 63488 00:19:25.674 } 00:19:25.674 ] 00:19:25.674 }' 00:19:25.674 13:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:25.674 13:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:25.674 13:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:25.674 13:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:25.674 13:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:25.933 [2024-12-06 13:14:12.812991] 
bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:25.933 [2024-12-06 13:14:12.912973] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:25.933 [2024-12-06 13:14:12.916430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:26.757 74.12 IOPS, 222.38 MiB/s [2024-12-06T13:14:13.773Z] 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:26.757 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:26.757 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:26.757 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:26.757 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:26.757 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:26.757 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.757 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.757 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.757 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:26.758 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.758 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:26.758 "name": "raid_bdev1", 00:19:26.758 "uuid": "ab908e36-f2b0-4a4d-9dc8-6c21f82c3735", 00:19:26.758 "strip_size_kb": 0, 00:19:26.758 "state": "online", 00:19:26.758 "raid_level": "raid1", 00:19:26.758 "superblock": true, 00:19:26.758 
"num_base_bdevs": 2, 00:19:26.758 "num_base_bdevs_discovered": 2, 00:19:26.758 "num_base_bdevs_operational": 2, 00:19:26.758 "base_bdevs_list": [ 00:19:26.758 { 00:19:26.758 "name": "spare", 00:19:26.758 "uuid": "af3690d9-c22e-5c44-8db3-b6f16f04a3b4", 00:19:26.758 "is_configured": true, 00:19:26.758 "data_offset": 2048, 00:19:26.758 "data_size": 63488 00:19:26.758 }, 00:19:26.758 { 00:19:26.758 "name": "BaseBdev2", 00:19:26.758 "uuid": "25f64118-d596-52c4-a3b4-4edc6aba4ec0", 00:19:26.758 "is_configured": true, 00:19:26.758 "data_offset": 2048, 00:19:26.758 "data_size": 63488 00:19:26.758 } 00:19:26.758 ] 00:19:26.758 }' 00:19:26.758 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:26.758 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:26.758 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:27.017 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:27.017 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:19:27.017 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:27.017 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:27.017 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:27.017 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:27.017 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:27.017 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.017 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:19:27.017 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.017 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:27.017 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.017 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:27.017 "name": "raid_bdev1", 00:19:27.017 "uuid": "ab908e36-f2b0-4a4d-9dc8-6c21f82c3735", 00:19:27.017 "strip_size_kb": 0, 00:19:27.017 "state": "online", 00:19:27.017 "raid_level": "raid1", 00:19:27.017 "superblock": true, 00:19:27.017 "num_base_bdevs": 2, 00:19:27.017 "num_base_bdevs_discovered": 2, 00:19:27.017 "num_base_bdevs_operational": 2, 00:19:27.017 "base_bdevs_list": [ 00:19:27.017 { 00:19:27.017 "name": "spare", 00:19:27.017 "uuid": "af3690d9-c22e-5c44-8db3-b6f16f04a3b4", 00:19:27.017 "is_configured": true, 00:19:27.017 "data_offset": 2048, 00:19:27.017 "data_size": 63488 00:19:27.017 }, 00:19:27.017 { 00:19:27.017 "name": "BaseBdev2", 00:19:27.017 "uuid": "25f64118-d596-52c4-a3b4-4edc6aba4ec0", 00:19:27.017 "is_configured": true, 00:19:27.017 "data_offset": 2048, 00:19:27.017 "data_size": 63488 00:19:27.017 } 00:19:27.017 ] 00:19:27.017 }' 00:19:27.017 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:27.017 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:27.017 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:27.017 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:27.017 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:27.017 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:19:27.017 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:27.017 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:27.017 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:27.017 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:27.017 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:27.017 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:27.017 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:27.017 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:27.017 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.017 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.017 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:27.017 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.017 13:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.017 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:27.017 "name": "raid_bdev1", 00:19:27.017 "uuid": "ab908e36-f2b0-4a4d-9dc8-6c21f82c3735", 00:19:27.017 "strip_size_kb": 0, 00:19:27.017 "state": "online", 00:19:27.017 "raid_level": "raid1", 00:19:27.017 "superblock": true, 00:19:27.017 "num_base_bdevs": 2, 00:19:27.017 "num_base_bdevs_discovered": 2, 00:19:27.017 "num_base_bdevs_operational": 2, 00:19:27.017 "base_bdevs_list": [ 00:19:27.017 { 00:19:27.017 "name": 
"spare", 00:19:27.017 "uuid": "af3690d9-c22e-5c44-8db3-b6f16f04a3b4", 00:19:27.017 "is_configured": true, 00:19:27.017 "data_offset": 2048, 00:19:27.017 "data_size": 63488 00:19:27.017 }, 00:19:27.017 { 00:19:27.017 "name": "BaseBdev2", 00:19:27.017 "uuid": "25f64118-d596-52c4-a3b4-4edc6aba4ec0", 00:19:27.017 "is_configured": true, 00:19:27.017 "data_offset": 2048, 00:19:27.017 "data_size": 63488 00:19:27.017 } 00:19:27.017 ] 00:19:27.017 }' 00:19:27.017 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:27.017 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:27.533 70.33 IOPS, 211.00 MiB/s [2024-12-06T13:14:14.549Z] 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:27.533 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.533 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:27.533 [2024-12-06 13:14:14.488877] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:27.533 [2024-12-06 13:14:14.489289] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:27.792 00:19:27.792 Latency(us) 00:19:27.792 [2024-12-06T13:14:14.808Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.792 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:19:27.792 raid_bdev1 : 9.35 68.95 206.86 0.00 0.00 19585.40 290.44 121539.49 00:19:27.792 [2024-12-06T13:14:14.808Z] =================================================================================================================== 00:19:27.792 [2024-12-06T13:14:14.808Z] Total : 68.95 206.86 0.00 0.00 19585.40 290.44 121539.49 00:19:27.792 { 00:19:27.792 "results": [ 00:19:27.792 { 00:19:27.792 "job": "raid_bdev1", 00:19:27.792 "core_mask": "0x1", 
00:19:27.792 "workload": "randrw", 00:19:27.792 "percentage": 50, 00:19:27.792 "status": "finished", 00:19:27.792 "queue_depth": 2, 00:19:27.792 "io_size": 3145728, 00:19:27.792 "runtime": 9.354283, 00:19:27.792 "iops": 68.95237187072489, 00:19:27.792 "mibps": 206.85711561217465, 00:19:27.792 "io_failed": 0, 00:19:27.792 "io_timeout": 0, 00:19:27.792 "avg_latency_us": 19585.396363636362, 00:19:27.792 "min_latency_us": 290.44363636363636, 00:19:27.792 "max_latency_us": 121539.4909090909 00:19:27.792 } 00:19:27.792 ], 00:19:27.792 "core_count": 1 00:19:27.792 } 00:19:27.792 [2024-12-06 13:14:14.606820] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:27.792 [2024-12-06 13:14:14.606922] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:27.792 [2024-12-06 13:14:14.607053] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:27.792 [2024-12-06 13:14:14.607071] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:27.792 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.792 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.792 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.792 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:27.792 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:19:27.792 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.792 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:27.792 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:27.792 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:19:27.792 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:19:27.792 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:27.792 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:19:27.792 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:27.792 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:27.792 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:27.792 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:19:27.792 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:27.792 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:27.792 13:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:19:28.051 /dev/nbd0 00:19:28.051 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:28.051 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:28.051 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:28.051 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:19:28.051 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:28.051 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:28.051 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:28.051 
13:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:19:28.051 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:28.051 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:28.051 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:28.051 1+0 records in 00:19:28.051 1+0 records out 00:19:28.051 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000742965 s, 5.5 MB/s 00:19:28.051 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:28.051 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:19:28.051 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:28.051 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:28.051 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:19:28.051 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:28.051 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:28.051 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:19:28.051 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:19:28.051 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:19:28.051 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:28.051 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:19:28.051 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:28.051 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:28.051 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:28.051 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:19:28.051 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:28.051 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:28.051 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:19:28.317 /dev/nbd1 00:19:28.582 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:28.582 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:28.582 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:28.582 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:19:28.582 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:28.582 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:28.582 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:28.582 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:19:28.582 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:28.582 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:28.582 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:28.582 1+0 records in 00:19:28.582 1+0 records out 00:19:28.582 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387299 s, 10.6 MB/s 00:19:28.582 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:28.582 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:19:28.582 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:28.582 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:28.582 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:19:28.582 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:28.582 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:28.582 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:28.582 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:19:28.582 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:28.582 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:28.582 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:28.582 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:19:28.582 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:28.582 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:29.150 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:29.150 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:29.150 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:29.150 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:29.150 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:29.150 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:29.150 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:19:29.150 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:29.150 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:29.150 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:29.150 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:29.150 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:29.150 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:19:29.150 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:29.150 13:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:29.409 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:29.409 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:29.409 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:29.409 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:29.409 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:29.409 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:29.409 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:19:29.409 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:29.409 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:29.409 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:29.409 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.409 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:29.409 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.409 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:29.409 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.409 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:29.409 [2024-12-06 13:14:16.266892] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:29.409 [2024-12-06 13:14:16.266958] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:29.409 [2024-12-06 13:14:16.266995] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:19:29.409 [2024-12-06 13:14:16.267012] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:29.409 [2024-12-06 13:14:16.269956] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:29.409 [2024-12-06 13:14:16.270146] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:29.410 [2024-12-06 13:14:16.270309] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:29.410 [2024-12-06 13:14:16.270377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:29.410 [2024-12-06 13:14:16.270584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:29.410 spare 00:19:29.410 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.410 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:29.410 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.410 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:29.410 [2024-12-06 13:14:16.370713] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:29.410 [2024-12-06 13:14:16.370770] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:29.410 [2024-12-06 13:14:16.371154] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:19:29.410 [2024-12-06 13:14:16.371397] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:29.410 [2024-12-06 13:14:16.371416] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:29.410 [2024-12-06 13:14:16.371700] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:29.410 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.410 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:29.410 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:29.410 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:29.410 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:29.410 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:29.410 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:29.410 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.410 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.410 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.410 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.410 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.410 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.410 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.410 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:29.410 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.668 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.668 "name": "raid_bdev1", 00:19:29.668 "uuid": "ab908e36-f2b0-4a4d-9dc8-6c21f82c3735", 00:19:29.668 "strip_size_kb": 0, 00:19:29.668 "state": "online", 00:19:29.668 "raid_level": "raid1", 00:19:29.668 "superblock": true, 00:19:29.668 "num_base_bdevs": 2, 00:19:29.668 
"num_base_bdevs_discovered": 2, 00:19:29.668 "num_base_bdevs_operational": 2, 00:19:29.668 "base_bdevs_list": [ 00:19:29.668 { 00:19:29.668 "name": "spare", 00:19:29.668 "uuid": "af3690d9-c22e-5c44-8db3-b6f16f04a3b4", 00:19:29.668 "is_configured": true, 00:19:29.668 "data_offset": 2048, 00:19:29.668 "data_size": 63488 00:19:29.668 }, 00:19:29.668 { 00:19:29.668 "name": "BaseBdev2", 00:19:29.668 "uuid": "25f64118-d596-52c4-a3b4-4edc6aba4ec0", 00:19:29.668 "is_configured": true, 00:19:29.668 "data_offset": 2048, 00:19:29.668 "data_size": 63488 00:19:29.668 } 00:19:29.668 ] 00:19:29.668 }' 00:19:29.668 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.668 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:29.926 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:29.926 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:29.926 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:29.926 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:29.926 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:29.926 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.926 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.926 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.926 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:29.926 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.185 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:30.185 "name": "raid_bdev1", 00:19:30.185 "uuid": "ab908e36-f2b0-4a4d-9dc8-6c21f82c3735", 00:19:30.185 "strip_size_kb": 0, 00:19:30.185 "state": "online", 00:19:30.185 "raid_level": "raid1", 00:19:30.185 "superblock": true, 00:19:30.185 "num_base_bdevs": 2, 00:19:30.185 "num_base_bdevs_discovered": 2, 00:19:30.185 "num_base_bdevs_operational": 2, 00:19:30.185 "base_bdevs_list": [ 00:19:30.185 { 00:19:30.185 "name": "spare", 00:19:30.185 "uuid": "af3690d9-c22e-5c44-8db3-b6f16f04a3b4", 00:19:30.185 "is_configured": true, 00:19:30.185 "data_offset": 2048, 00:19:30.185 "data_size": 63488 00:19:30.185 }, 00:19:30.185 { 00:19:30.185 "name": "BaseBdev2", 00:19:30.185 "uuid": "25f64118-d596-52c4-a3b4-4edc6aba4ec0", 00:19:30.185 "is_configured": true, 00:19:30.185 "data_offset": 2048, 00:19:30.185 "data_size": 63488 00:19:30.185 } 00:19:30.185 ] 00:19:30.185 }' 00:19:30.185 13:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:30.185 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:30.185 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:30.185 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:30.185 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.185 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:30.185 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.185 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:30.185 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.185 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:30.185 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:30.185 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.185 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:30.185 [2024-12-06 13:14:17.120025] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:30.185 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.185 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:30.185 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:30.185 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:30.185 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:30.185 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:30.185 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:30.185 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.185 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.185 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.185 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.185 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.186 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.186 
13:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.186 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:30.186 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.186 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.186 "name": "raid_bdev1", 00:19:30.186 "uuid": "ab908e36-f2b0-4a4d-9dc8-6c21f82c3735", 00:19:30.186 "strip_size_kb": 0, 00:19:30.186 "state": "online", 00:19:30.186 "raid_level": "raid1", 00:19:30.186 "superblock": true, 00:19:30.186 "num_base_bdevs": 2, 00:19:30.186 "num_base_bdevs_discovered": 1, 00:19:30.186 "num_base_bdevs_operational": 1, 00:19:30.186 "base_bdevs_list": [ 00:19:30.186 { 00:19:30.186 "name": null, 00:19:30.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.186 "is_configured": false, 00:19:30.186 "data_offset": 0, 00:19:30.186 "data_size": 63488 00:19:30.186 }, 00:19:30.186 { 00:19:30.186 "name": "BaseBdev2", 00:19:30.186 "uuid": "25f64118-d596-52c4-a3b4-4edc6aba4ec0", 00:19:30.186 "is_configured": true, 00:19:30.186 "data_offset": 2048, 00:19:30.186 "data_size": 63488 00:19:30.186 } 00:19:30.186 ] 00:19:30.186 }' 00:19:30.186 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.186 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:30.754 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:30.754 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.754 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:30.754 [2024-12-06 13:14:17.672267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:30.754 [2024-12-06 13:14:17.672714] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:30.754 [2024-12-06 13:14:17.672753] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:30.754 [2024-12-06 13:14:17.672803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:30.754 [2024-12-06 13:14:17.689214] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:19:30.754 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.754 13:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:30.754 [2024-12-06 13:14:17.691775] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:31.690 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:31.690 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:31.690 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:31.690 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:31.690 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:31.690 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.690 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.690 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.690 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:31.948 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:19:31.948 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:31.948 "name": "raid_bdev1", 00:19:31.948 "uuid": "ab908e36-f2b0-4a4d-9dc8-6c21f82c3735", 00:19:31.948 "strip_size_kb": 0, 00:19:31.948 "state": "online", 00:19:31.948 "raid_level": "raid1", 00:19:31.948 "superblock": true, 00:19:31.948 "num_base_bdevs": 2, 00:19:31.948 "num_base_bdevs_discovered": 2, 00:19:31.948 "num_base_bdevs_operational": 2, 00:19:31.948 "process": { 00:19:31.948 "type": "rebuild", 00:19:31.948 "target": "spare", 00:19:31.948 "progress": { 00:19:31.948 "blocks": 20480, 00:19:31.948 "percent": 32 00:19:31.948 } 00:19:31.948 }, 00:19:31.948 "base_bdevs_list": [ 00:19:31.948 { 00:19:31.948 "name": "spare", 00:19:31.948 "uuid": "af3690d9-c22e-5c44-8db3-b6f16f04a3b4", 00:19:31.948 "is_configured": true, 00:19:31.948 "data_offset": 2048, 00:19:31.948 "data_size": 63488 00:19:31.948 }, 00:19:31.948 { 00:19:31.948 "name": "BaseBdev2", 00:19:31.948 "uuid": "25f64118-d596-52c4-a3b4-4edc6aba4ec0", 00:19:31.948 "is_configured": true, 00:19:31.948 "data_offset": 2048, 00:19:31.948 "data_size": 63488 00:19:31.948 } 00:19:31.948 ] 00:19:31.948 }' 00:19:31.948 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:31.948 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:31.948 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:31.948 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:31.948 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:31.948 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.948 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:31.948 
[2024-12-06 13:14:18.845864] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:31.948 [2024-12-06 13:14:18.901792] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:31.948 [2024-12-06 13:14:18.901909] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:31.948 [2024-12-06 13:14:18.901936] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:31.948 [2024-12-06 13:14:18.901955] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:31.948 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.948 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:31.948 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:31.948 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:31.948 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:31.948 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:31.948 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:31.948 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.948 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.948 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.948 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.948 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.948 13:14:18 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.948 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:31.948 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.948 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.206 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.206 "name": "raid_bdev1", 00:19:32.206 "uuid": "ab908e36-f2b0-4a4d-9dc8-6c21f82c3735", 00:19:32.206 "strip_size_kb": 0, 00:19:32.206 "state": "online", 00:19:32.206 "raid_level": "raid1", 00:19:32.206 "superblock": true, 00:19:32.206 "num_base_bdevs": 2, 00:19:32.206 "num_base_bdevs_discovered": 1, 00:19:32.206 "num_base_bdevs_operational": 1, 00:19:32.206 "base_bdevs_list": [ 00:19:32.206 { 00:19:32.206 "name": null, 00:19:32.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.206 "is_configured": false, 00:19:32.206 "data_offset": 0, 00:19:32.206 "data_size": 63488 00:19:32.206 }, 00:19:32.206 { 00:19:32.206 "name": "BaseBdev2", 00:19:32.206 "uuid": "25f64118-d596-52c4-a3b4-4edc6aba4ec0", 00:19:32.206 "is_configured": true, 00:19:32.206 "data_offset": 2048, 00:19:32.206 "data_size": 63488 00:19:32.206 } 00:19:32.206 ] 00:19:32.206 }' 00:19:32.206 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.206 13:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:32.464 13:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:32.464 13:14:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.464 13:14:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:32.464 [2024-12-06 13:14:19.469645] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:32.464 [2024-12-06 13:14:19.469897] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:32.464 [2024-12-06 13:14:19.469975] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:32.464 [2024-12-06 13:14:19.470236] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:32.464 [2024-12-06 13:14:19.470956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:32.464 [2024-12-06 13:14:19.471004] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:32.464 [2024-12-06 13:14:19.471138] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:32.464 [2024-12-06 13:14:19.471167] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:32.464 [2024-12-06 13:14:19.471182] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:32.464 [2024-12-06 13:14:19.471216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:32.722 [2024-12-06 13:14:19.487802] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:19:32.722 spare 00:19:32.722 13:14:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.722 13:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:32.722 [2024-12-06 13:14:19.490345] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:33.659 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:33.659 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:33.659 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:33.659 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:33.659 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:33.659 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.659 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.659 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.659 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:33.659 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.659 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:33.659 "name": "raid_bdev1", 00:19:33.659 "uuid": "ab908e36-f2b0-4a4d-9dc8-6c21f82c3735", 00:19:33.659 "strip_size_kb": 0, 00:19:33.659 
"state": "online", 00:19:33.659 "raid_level": "raid1", 00:19:33.659 "superblock": true, 00:19:33.659 "num_base_bdevs": 2, 00:19:33.659 "num_base_bdevs_discovered": 2, 00:19:33.659 "num_base_bdevs_operational": 2, 00:19:33.659 "process": { 00:19:33.659 "type": "rebuild", 00:19:33.659 "target": "spare", 00:19:33.659 "progress": { 00:19:33.659 "blocks": 20480, 00:19:33.659 "percent": 32 00:19:33.659 } 00:19:33.659 }, 00:19:33.659 "base_bdevs_list": [ 00:19:33.659 { 00:19:33.659 "name": "spare", 00:19:33.659 "uuid": "af3690d9-c22e-5c44-8db3-b6f16f04a3b4", 00:19:33.659 "is_configured": true, 00:19:33.659 "data_offset": 2048, 00:19:33.659 "data_size": 63488 00:19:33.659 }, 00:19:33.659 { 00:19:33.659 "name": "BaseBdev2", 00:19:33.659 "uuid": "25f64118-d596-52c4-a3b4-4edc6aba4ec0", 00:19:33.659 "is_configured": true, 00:19:33.659 "data_offset": 2048, 00:19:33.659 "data_size": 63488 00:19:33.659 } 00:19:33.659 ] 00:19:33.659 }' 00:19:33.659 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:33.659 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:33.659 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:33.659 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:33.659 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:33.659 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.659 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:33.659 [2024-12-06 13:14:20.660999] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:33.918 [2024-12-06 13:14:20.703093] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:19:33.918 [2024-12-06 13:14:20.703602] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:33.918 [2024-12-06 13:14:20.703835] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:33.918 [2024-12-06 13:14:20.703897] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:33.918 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.918 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:33.918 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:33.918 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:33.918 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:33.918 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:33.918 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:33.918 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:33.918 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:33.918 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:33.918 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:33.918 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.918 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.918 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.918 13:14:20 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:33.918 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.918 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:33.918 "name": "raid_bdev1", 00:19:33.918 "uuid": "ab908e36-f2b0-4a4d-9dc8-6c21f82c3735", 00:19:33.918 "strip_size_kb": 0, 00:19:33.918 "state": "online", 00:19:33.918 "raid_level": "raid1", 00:19:33.918 "superblock": true, 00:19:33.918 "num_base_bdevs": 2, 00:19:33.918 "num_base_bdevs_discovered": 1, 00:19:33.918 "num_base_bdevs_operational": 1, 00:19:33.918 "base_bdevs_list": [ 00:19:33.918 { 00:19:33.918 "name": null, 00:19:33.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.918 "is_configured": false, 00:19:33.918 "data_offset": 0, 00:19:33.918 "data_size": 63488 00:19:33.918 }, 00:19:33.918 { 00:19:33.918 "name": "BaseBdev2", 00:19:33.918 "uuid": "25f64118-d596-52c4-a3b4-4edc6aba4ec0", 00:19:33.918 "is_configured": true, 00:19:33.918 "data_offset": 2048, 00:19:33.918 "data_size": 63488 00:19:33.918 } 00:19:33.918 ] 00:19:33.918 }' 00:19:33.918 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:33.918 13:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:34.485 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:34.485 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:34.485 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:34.485 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:34.485 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:34.485 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.485 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.485 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.485 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:34.485 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.485 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:34.485 "name": "raid_bdev1", 00:19:34.485 "uuid": "ab908e36-f2b0-4a4d-9dc8-6c21f82c3735", 00:19:34.485 "strip_size_kb": 0, 00:19:34.485 "state": "online", 00:19:34.485 "raid_level": "raid1", 00:19:34.485 "superblock": true, 00:19:34.485 "num_base_bdevs": 2, 00:19:34.485 "num_base_bdevs_discovered": 1, 00:19:34.485 "num_base_bdevs_operational": 1, 00:19:34.485 "base_bdevs_list": [ 00:19:34.485 { 00:19:34.485 "name": null, 00:19:34.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.485 "is_configured": false, 00:19:34.485 "data_offset": 0, 00:19:34.485 "data_size": 63488 00:19:34.485 }, 00:19:34.485 { 00:19:34.485 "name": "BaseBdev2", 00:19:34.485 "uuid": "25f64118-d596-52c4-a3b4-4edc6aba4ec0", 00:19:34.485 "is_configured": true, 00:19:34.485 "data_offset": 2048, 00:19:34.485 "data_size": 63488 00:19:34.485 } 00:19:34.485 ] 00:19:34.485 }' 00:19:34.485 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:34.485 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:34.485 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:34.485 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:34.485 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:34.485 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.485 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:34.485 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.485 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:34.485 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.485 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:34.485 [2024-12-06 13:14:21.414581] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:34.485 [2024-12-06 13:14:21.414704] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:34.485 [2024-12-06 13:14:21.414789] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:19:34.485 [2024-12-06 13:14:21.414830] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:34.485 [2024-12-06 13:14:21.415690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:34.485 [2024-12-06 13:14:21.415731] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:34.485 [2024-12-06 13:14:21.415918] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:34.485 [2024-12-06 13:14:21.415954] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:34.485 [2024-12-06 13:14:21.415972] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:34.485 [2024-12-06 13:14:21.415990] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:34.485 BaseBdev1 00:19:34.485 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.485 13:14:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:35.421 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:35.421 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:35.421 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:35.421 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:35.421 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:35.421 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:35.421 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.421 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.421 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.421 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.421 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.421 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.421 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.421 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.679 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.679 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.679 "name": "raid_bdev1", 00:19:35.679 "uuid": "ab908e36-f2b0-4a4d-9dc8-6c21f82c3735", 00:19:35.680 "strip_size_kb": 0, 00:19:35.680 "state": "online", 00:19:35.680 "raid_level": "raid1", 00:19:35.680 "superblock": true, 00:19:35.680 "num_base_bdevs": 2, 00:19:35.680 "num_base_bdevs_discovered": 1, 00:19:35.680 "num_base_bdevs_operational": 1, 00:19:35.680 "base_bdevs_list": [ 00:19:35.680 { 00:19:35.680 "name": null, 00:19:35.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.680 "is_configured": false, 00:19:35.680 "data_offset": 0, 00:19:35.680 "data_size": 63488 00:19:35.680 }, 00:19:35.680 { 00:19:35.680 "name": "BaseBdev2", 00:19:35.680 "uuid": "25f64118-d596-52c4-a3b4-4edc6aba4ec0", 00:19:35.680 "is_configured": true, 00:19:35.680 "data_offset": 2048, 00:19:35.680 "data_size": 63488 00:19:35.680 } 00:19:35.680 ] 00:19:35.680 }' 00:19:35.680 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.680 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:36.247 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:36.247 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:36.247 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:36.247 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:36.247 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:36.247 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.247 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:19:36.247 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:36.247 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.247 13:14:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.247 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:36.247 "name": "raid_bdev1", 00:19:36.247 "uuid": "ab908e36-f2b0-4a4d-9dc8-6c21f82c3735", 00:19:36.247 "strip_size_kb": 0, 00:19:36.247 "state": "online", 00:19:36.247 "raid_level": "raid1", 00:19:36.247 "superblock": true, 00:19:36.247 "num_base_bdevs": 2, 00:19:36.247 "num_base_bdevs_discovered": 1, 00:19:36.247 "num_base_bdevs_operational": 1, 00:19:36.247 "base_bdevs_list": [ 00:19:36.247 { 00:19:36.247 "name": null, 00:19:36.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.247 "is_configured": false, 00:19:36.247 "data_offset": 0, 00:19:36.247 "data_size": 63488 00:19:36.247 }, 00:19:36.247 { 00:19:36.247 "name": "BaseBdev2", 00:19:36.247 "uuid": "25f64118-d596-52c4-a3b4-4edc6aba4ec0", 00:19:36.247 "is_configured": true, 00:19:36.247 "data_offset": 2048, 00:19:36.247 "data_size": 63488 00:19:36.247 } 00:19:36.247 ] 00:19:36.247 }' 00:19:36.247 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:36.247 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:36.247 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:36.247 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:36.247 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:36.247 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@652 -- # local es=0 00:19:36.247 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:36.247 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:36.247 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:36.247 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:36.247 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:36.247 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:36.247 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.247 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:36.247 [2024-12-06 13:14:23.135496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:36.247 [2024-12-06 13:14:23.135894] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:36.247 [2024-12-06 13:14:23.135926] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:36.247 request: 00:19:36.247 { 00:19:36.247 "base_bdev": "BaseBdev1", 00:19:36.247 "raid_bdev": "raid_bdev1", 00:19:36.247 "method": "bdev_raid_add_base_bdev", 00:19:36.247 "req_id": 1 00:19:36.247 } 00:19:36.247 Got JSON-RPC error response 00:19:36.247 response: 00:19:36.247 { 00:19:36.247 "code": -22, 00:19:36.247 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:36.247 } 00:19:36.247 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:19:36.247 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:19:36.247 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:36.247 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:36.247 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:36.247 13:14:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:37.186 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:37.186 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:37.186 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:37.186 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:37.186 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:37.186 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:37.186 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:37.186 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:37.186 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:37.186 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:37.186 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.186 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.186 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:19:37.186 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:37.186 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.452 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:37.452 "name": "raid_bdev1", 00:19:37.452 "uuid": "ab908e36-f2b0-4a4d-9dc8-6c21f82c3735", 00:19:37.452 "strip_size_kb": 0, 00:19:37.452 "state": "online", 00:19:37.452 "raid_level": "raid1", 00:19:37.452 "superblock": true, 00:19:37.452 "num_base_bdevs": 2, 00:19:37.452 "num_base_bdevs_discovered": 1, 00:19:37.452 "num_base_bdevs_operational": 1, 00:19:37.452 "base_bdevs_list": [ 00:19:37.452 { 00:19:37.452 "name": null, 00:19:37.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.452 "is_configured": false, 00:19:37.452 "data_offset": 0, 00:19:37.452 "data_size": 63488 00:19:37.452 }, 00:19:37.452 { 00:19:37.452 "name": "BaseBdev2", 00:19:37.452 "uuid": "25f64118-d596-52c4-a3b4-4edc6aba4ec0", 00:19:37.452 "is_configured": true, 00:19:37.452 "data_offset": 2048, 00:19:37.452 "data_size": 63488 00:19:37.452 } 00:19:37.452 ] 00:19:37.452 }' 00:19:37.452 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:37.452 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:37.710 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:37.710 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:37.710 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:37.710 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:37.710 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:37.710 13:14:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.710 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.710 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:37.710 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.710 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.968 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:37.968 "name": "raid_bdev1", 00:19:37.968 "uuid": "ab908e36-f2b0-4a4d-9dc8-6c21f82c3735", 00:19:37.968 "strip_size_kb": 0, 00:19:37.968 "state": "online", 00:19:37.968 "raid_level": "raid1", 00:19:37.968 "superblock": true, 00:19:37.969 "num_base_bdevs": 2, 00:19:37.969 "num_base_bdevs_discovered": 1, 00:19:37.969 "num_base_bdevs_operational": 1, 00:19:37.969 "base_bdevs_list": [ 00:19:37.969 { 00:19:37.969 "name": null, 00:19:37.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.969 "is_configured": false, 00:19:37.969 "data_offset": 0, 00:19:37.969 "data_size": 63488 00:19:37.969 }, 00:19:37.969 { 00:19:37.969 "name": "BaseBdev2", 00:19:37.969 "uuid": "25f64118-d596-52c4-a3b4-4edc6aba4ec0", 00:19:37.969 "is_configured": true, 00:19:37.969 "data_offset": 2048, 00:19:37.969 "data_size": 63488 00:19:37.969 } 00:19:37.969 ] 00:19:37.969 }' 00:19:37.969 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:37.969 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:37.969 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:37.969 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:37.969 13:14:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77378 00:19:37.969 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77378 ']' 00:19:37.969 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77378 00:19:37.969 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:19:37.969 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:37.969 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77378 00:19:37.969 killing process with pid 77378 00:19:37.969 Received shutdown signal, test time was about 19.632061 seconds 00:19:37.969 00:19:37.969 Latency(us) 00:19:37.969 [2024-12-06T13:14:24.985Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.969 [2024-12-06T13:14:24.985Z] =================================================================================================================== 00:19:37.969 [2024-12-06T13:14:24.985Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:37.969 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:37.969 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:37.969 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77378' 00:19:37.969 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77378 00:19:37.969 [2024-12-06 13:14:24.862609] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:37.969 13:14:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77378 00:19:37.969 [2024-12-06 13:14:24.862868] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:37.969 [2024-12-06 13:14:24.862966] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:37.969 [2024-12-06 13:14:24.862992] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:38.227 [2024-12-06 13:14:25.083805] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:19:39.623 ************************************ 00:19:39.623 END TEST raid_rebuild_test_sb_io 00:19:39.623 ************************************ 00:19:39.623 00:19:39.623 real 0m23.097s 00:19:39.623 user 0m30.978s 00:19:39.623 sys 0m2.248s 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:39.623 13:14:26 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:19:39.623 13:14:26 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:19:39.623 13:14:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:39.623 13:14:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:39.623 13:14:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:39.623 ************************************ 00:19:39.623 START TEST raid_rebuild_test 00:19:39.623 ************************************ 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:19:39.623 13:14:26 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=78105 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 78105 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 78105 ']' 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:39.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:39.623 13:14:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.623 [2024-12-06 13:14:26.464081] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:19:39.623 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:39.623 Zero copy mechanism will not be used. 00:19:39.623 [2024-12-06 13:14:26.464653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78105 ] 00:19:39.882 [2024-12-06 13:14:26.642147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.882 [2024-12-06 13:14:26.791376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.141 [2024-12-06 13:14:27.015998] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:40.141 [2024-12-06 13:14:27.016505] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:40.709 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.709 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:19:40.709 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:40.709 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:40.709 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.709 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.709 BaseBdev1_malloc 00:19:40.709 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.709 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:40.709 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.709 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:19:40.709 [2024-12-06 13:14:27.516780] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:40.709 [2024-12-06 13:14:27.516945] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.709 [2024-12-06 13:14:27.516994] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:40.709 [2024-12-06 13:14:27.517034] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.709 [2024-12-06 13:14:27.520942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.709 [2024-12-06 13:14:27.521082] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:40.709 BaseBdev1 00:19:40.709 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.709 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:40.709 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:40.709 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.709 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.709 BaseBdev2_malloc 00:19:40.709 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.709 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:40.709 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.709 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.709 [2024-12-06 13:14:27.580589] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:40.709 [2024-12-06 13:14:27.580751] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:19:40.709 [2024-12-06 13:14:27.580795] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:40.709 [2024-12-06 13:14:27.580819] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.709 [2024-12-06 13:14:27.584528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.709 [2024-12-06 13:14:27.584624] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:40.709 BaseBdev2 00:19:40.709 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.709 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:40.709 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:40.709 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.709 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.709 BaseBdev3_malloc 00:19:40.709 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.709 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:40.709 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.709 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.709 [2024-12-06 13:14:27.652899] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:40.709 [2024-12-06 13:14:27.652998] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.709 [2024-12-06 13:14:27.653033] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:40.709 [2024-12-06 13:14:27.653052] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.709 [2024-12-06 13:14:27.656132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.709 [2024-12-06 13:14:27.656185] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:40.709 BaseBdev3 00:19:40.709 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.709 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:40.709 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:40.709 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.709 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.709 BaseBdev4_malloc 00:19:40.709 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.709 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:40.709 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.709 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.709 [2024-12-06 13:14:27.714918] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:40.709 [2024-12-06 13:14:27.715007] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.710 [2024-12-06 13:14:27.715039] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:40.710 [2024-12-06 13:14:27.715059] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.710 [2024-12-06 13:14:27.718166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.710 [2024-12-06 13:14:27.718233] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:40.710 BaseBdev4 00:19:40.710 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.710 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:40.710 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.710 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.970 spare_malloc 00:19:40.970 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.970 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:40.970 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.970 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.970 spare_delay 00:19:40.970 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.970 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:40.970 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.970 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.970 [2024-12-06 13:14:27.779371] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:40.970 [2024-12-06 13:14:27.779449] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.970 [2024-12-06 13:14:27.779501] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:40.970 [2024-12-06 13:14:27.779524] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.970 [2024-12-06 
13:14:27.782442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.970 [2024-12-06 13:14:27.782534] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:40.970 spare 00:19:40.970 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.970 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:19:40.970 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.970 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.970 [2024-12-06 13:14:27.791505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:40.970 [2024-12-06 13:14:27.794148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:40.970 [2024-12-06 13:14:27.794235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:40.970 [2024-12-06 13:14:27.794317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:40.970 [2024-12-06 13:14:27.794438] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:40.970 [2024-12-06 13:14:27.794461] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:40.970 [2024-12-06 13:14:27.794862] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:40.970 [2024-12-06 13:14:27.795273] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:40.970 [2024-12-06 13:14:27.795301] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:40.970 [2024-12-06 13:14:27.795612] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:19:40.970 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.970 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:40.970 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:40.970 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:40.970 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:40.970 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:40.970 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:40.970 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:40.970 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:40.970 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:40.970 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:40.970 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.970 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.970 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.970 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.970 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.970 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:40.970 "name": "raid_bdev1", 00:19:40.970 "uuid": "92a52a65-7358-412c-b113-bd4a156cfe3e", 00:19:40.970 "strip_size_kb": 0, 00:19:40.970 "state": "online", 00:19:40.970 "raid_level": 
"raid1", 00:19:40.970 "superblock": false, 00:19:40.970 "num_base_bdevs": 4, 00:19:40.970 "num_base_bdevs_discovered": 4, 00:19:40.970 "num_base_bdevs_operational": 4, 00:19:40.970 "base_bdevs_list": [ 00:19:40.970 { 00:19:40.970 "name": "BaseBdev1", 00:19:40.970 "uuid": "53759de8-5068-5630-a03d-6e46ea5fcf1e", 00:19:40.970 "is_configured": true, 00:19:40.970 "data_offset": 0, 00:19:40.970 "data_size": 65536 00:19:40.970 }, 00:19:40.970 { 00:19:40.970 "name": "BaseBdev2", 00:19:40.970 "uuid": "72eeb035-7705-59ae-aad5-5217072fedf3", 00:19:40.970 "is_configured": true, 00:19:40.970 "data_offset": 0, 00:19:40.970 "data_size": 65536 00:19:40.970 }, 00:19:40.970 { 00:19:40.970 "name": "BaseBdev3", 00:19:40.970 "uuid": "d8c7774e-af88-55f8-b4d6-960e91eff70b", 00:19:40.970 "is_configured": true, 00:19:40.970 "data_offset": 0, 00:19:40.970 "data_size": 65536 00:19:40.970 }, 00:19:40.970 { 00:19:40.970 "name": "BaseBdev4", 00:19:40.970 "uuid": "55a654fe-907a-57a5-847d-1d3099eb1302", 00:19:40.970 "is_configured": true, 00:19:40.970 "data_offset": 0, 00:19:40.970 "data_size": 65536 00:19:40.970 } 00:19:40.970 ] 00:19:40.970 }' 00:19:40.970 13:14:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:40.970 13:14:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.537 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:41.537 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.537 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.537 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:41.537 [2024-12-06 13:14:28.304284] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:41.537 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.537 13:14:28 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:19:41.537 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.537 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:41.537 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.537 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.537 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.537 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:19:41.537 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:41.537 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:41.537 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:41.537 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:41.537 13:14:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:41.537 13:14:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:41.537 13:14:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:41.537 13:14:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:41.537 13:14:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:41.537 13:14:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:41.537 13:14:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:41.537 13:14:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:41.537 13:14:28 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:41.796 [2024-12-06 13:14:28.648013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:41.796 /dev/nbd0 00:19:41.796 13:14:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:41.796 13:14:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:41.796 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:41.796 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:19:41.796 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:41.796 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:41.796 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:41.796 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:19:41.796 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:41.796 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:41.796 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:41.796 1+0 records in 00:19:41.796 1+0 records out 00:19:41.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000521456 s, 7.9 MB/s 00:19:41.796 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:41.796 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:19:41.796 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:19:41.796 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:41.796 13:14:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:19:41.796 13:14:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:41.796 13:14:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:41.796 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:41.796 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:41.796 13:14:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:19:51.769 65536+0 records in 00:19:51.769 65536+0 records out 00:19:51.769 33554432 bytes (34 MB, 32 MiB) copied, 8.579 s, 3.9 MB/s 00:19:51.769 13:14:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:51.769 13:14:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:51.769 13:14:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:51.770 13:14:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:51.770 13:14:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:51.770 13:14:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:51.770 13:14:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:51.770 [2024-12-06 13:14:37.598877] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:51.770 13:14:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:51.770 13:14:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:51.770 
13:14:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:51.770 13:14:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:51.770 13:14:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:51.770 13:14:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:51.770 13:14:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:51.770 13:14:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:51.770 13:14:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:51.770 13:14:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.770 13:14:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.770 [2024-12-06 13:14:37.631202] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:51.770 13:14:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.770 13:14:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:51.770 13:14:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:51.770 13:14:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:51.770 13:14:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:51.770 13:14:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:51.770 13:14:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:51.770 13:14:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:51.770 13:14:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:51.770 13:14:37 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:51.770 13:14:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:51.770 13:14:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.770 13:14:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.770 13:14:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.770 13:14:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.770 13:14:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.770 13:14:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:51.770 "name": "raid_bdev1", 00:19:51.770 "uuid": "92a52a65-7358-412c-b113-bd4a156cfe3e", 00:19:51.770 "strip_size_kb": 0, 00:19:51.770 "state": "online", 00:19:51.770 "raid_level": "raid1", 00:19:51.770 "superblock": false, 00:19:51.770 "num_base_bdevs": 4, 00:19:51.770 "num_base_bdevs_discovered": 3, 00:19:51.770 "num_base_bdevs_operational": 3, 00:19:51.770 "base_bdevs_list": [ 00:19:51.770 { 00:19:51.770 "name": null, 00:19:51.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.770 "is_configured": false, 00:19:51.770 "data_offset": 0, 00:19:51.770 "data_size": 65536 00:19:51.770 }, 00:19:51.770 { 00:19:51.770 "name": "BaseBdev2", 00:19:51.770 "uuid": "72eeb035-7705-59ae-aad5-5217072fedf3", 00:19:51.770 "is_configured": true, 00:19:51.770 "data_offset": 0, 00:19:51.770 "data_size": 65536 00:19:51.770 }, 00:19:51.770 { 00:19:51.770 "name": "BaseBdev3", 00:19:51.770 "uuid": "d8c7774e-af88-55f8-b4d6-960e91eff70b", 00:19:51.770 "is_configured": true, 00:19:51.770 "data_offset": 0, 00:19:51.770 "data_size": 65536 00:19:51.770 }, 00:19:51.770 { 00:19:51.770 "name": "BaseBdev4", 00:19:51.770 "uuid": "55a654fe-907a-57a5-847d-1d3099eb1302", 00:19:51.770 
"is_configured": true, 00:19:51.770 "data_offset": 0, 00:19:51.770 "data_size": 65536 00:19:51.770 } 00:19:51.770 ] 00:19:51.770 }' 00:19:51.770 13:14:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:51.770 13:14:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.770 13:14:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:51.770 13:14:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.770 13:14:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.770 [2024-12-06 13:14:38.127313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:51.770 [2024-12-06 13:14:38.142034] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:19:51.770 13:14:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.770 13:14:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:51.770 [2024-12-06 13:14:38.144945] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:52.336 13:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:52.336 13:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:52.336 13:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:52.336 13:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:52.336 13:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:52.336 13:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.336 13:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:19:52.336 13:14:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.336 13:14:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.336 13:14:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.336 13:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:52.336 "name": "raid_bdev1", 00:19:52.336 "uuid": "92a52a65-7358-412c-b113-bd4a156cfe3e", 00:19:52.336 "strip_size_kb": 0, 00:19:52.336 "state": "online", 00:19:52.336 "raid_level": "raid1", 00:19:52.336 "superblock": false, 00:19:52.336 "num_base_bdevs": 4, 00:19:52.336 "num_base_bdevs_discovered": 4, 00:19:52.336 "num_base_bdevs_operational": 4, 00:19:52.337 "process": { 00:19:52.337 "type": "rebuild", 00:19:52.337 "target": "spare", 00:19:52.337 "progress": { 00:19:52.337 "blocks": 20480, 00:19:52.337 "percent": 31 00:19:52.337 } 00:19:52.337 }, 00:19:52.337 "base_bdevs_list": [ 00:19:52.337 { 00:19:52.337 "name": "spare", 00:19:52.337 "uuid": "a1ec0fdf-931b-5d0f-8b96-af8f4a9579b5", 00:19:52.337 "is_configured": true, 00:19:52.337 "data_offset": 0, 00:19:52.337 "data_size": 65536 00:19:52.337 }, 00:19:52.337 { 00:19:52.337 "name": "BaseBdev2", 00:19:52.337 "uuid": "72eeb035-7705-59ae-aad5-5217072fedf3", 00:19:52.337 "is_configured": true, 00:19:52.337 "data_offset": 0, 00:19:52.337 "data_size": 65536 00:19:52.337 }, 00:19:52.337 { 00:19:52.337 "name": "BaseBdev3", 00:19:52.337 "uuid": "d8c7774e-af88-55f8-b4d6-960e91eff70b", 00:19:52.337 "is_configured": true, 00:19:52.337 "data_offset": 0, 00:19:52.337 "data_size": 65536 00:19:52.337 }, 00:19:52.337 { 00:19:52.337 "name": "BaseBdev4", 00:19:52.337 "uuid": "55a654fe-907a-57a5-847d-1d3099eb1302", 00:19:52.337 "is_configured": true, 00:19:52.337 "data_offset": 0, 00:19:52.337 "data_size": 65536 00:19:52.337 } 00:19:52.337 ] 00:19:52.337 }' 00:19:52.337 13:14:39 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:52.337 13:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:52.337 13:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:52.337 13:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:52.337 13:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:52.337 13:14:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.337 13:14:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.337 [2024-12-06 13:14:39.310878] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:52.595 [2024-12-06 13:14:39.356648] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:52.595 [2024-12-06 13:14:39.357041] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:52.595 [2024-12-06 13:14:39.357214] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:52.595 [2024-12-06 13:14:39.357275] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:52.595 13:14:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.595 13:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:52.595 13:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:52.595 13:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:52.595 13:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:52.595 13:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:19:52.595 13:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:52.595 13:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:52.595 13:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:52.595 13:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:52.595 13:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:52.595 13:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.595 13:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.595 13:14:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.595 13:14:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.595 13:14:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.595 13:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:52.595 "name": "raid_bdev1", 00:19:52.595 "uuid": "92a52a65-7358-412c-b113-bd4a156cfe3e", 00:19:52.595 "strip_size_kb": 0, 00:19:52.595 "state": "online", 00:19:52.595 "raid_level": "raid1", 00:19:52.595 "superblock": false, 00:19:52.595 "num_base_bdevs": 4, 00:19:52.595 "num_base_bdevs_discovered": 3, 00:19:52.595 "num_base_bdevs_operational": 3, 00:19:52.595 "base_bdevs_list": [ 00:19:52.595 { 00:19:52.595 "name": null, 00:19:52.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.595 "is_configured": false, 00:19:52.595 "data_offset": 0, 00:19:52.595 "data_size": 65536 00:19:52.595 }, 00:19:52.595 { 00:19:52.595 "name": "BaseBdev2", 00:19:52.595 "uuid": "72eeb035-7705-59ae-aad5-5217072fedf3", 00:19:52.595 "is_configured": true, 00:19:52.595 "data_offset": 0, 00:19:52.595 "data_size": 65536 00:19:52.595 }, 00:19:52.595 { 
00:19:52.595 "name": "BaseBdev3", 00:19:52.595 "uuid": "d8c7774e-af88-55f8-b4d6-960e91eff70b", 00:19:52.595 "is_configured": true, 00:19:52.595 "data_offset": 0, 00:19:52.595 "data_size": 65536 00:19:52.595 }, 00:19:52.595 { 00:19:52.595 "name": "BaseBdev4", 00:19:52.595 "uuid": "55a654fe-907a-57a5-847d-1d3099eb1302", 00:19:52.595 "is_configured": true, 00:19:52.595 "data_offset": 0, 00:19:52.595 "data_size": 65536 00:19:52.595 } 00:19:52.595 ] 00:19:52.595 }' 00:19:52.595 13:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:52.595 13:14:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.163 13:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:53.163 13:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:53.163 13:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:53.163 13:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:53.163 13:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:53.163 13:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.163 13:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.163 13:14:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.163 13:14:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.163 13:14:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.163 13:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:53.163 "name": "raid_bdev1", 00:19:53.163 "uuid": "92a52a65-7358-412c-b113-bd4a156cfe3e", 00:19:53.163 "strip_size_kb": 0, 00:19:53.163 "state": "online", 
00:19:53.163 "raid_level": "raid1", 00:19:53.163 "superblock": false, 00:19:53.163 "num_base_bdevs": 4, 00:19:53.163 "num_base_bdevs_discovered": 3, 00:19:53.163 "num_base_bdevs_operational": 3, 00:19:53.163 "base_bdevs_list": [ 00:19:53.163 { 00:19:53.163 "name": null, 00:19:53.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.163 "is_configured": false, 00:19:53.163 "data_offset": 0, 00:19:53.163 "data_size": 65536 00:19:53.163 }, 00:19:53.163 { 00:19:53.163 "name": "BaseBdev2", 00:19:53.163 "uuid": "72eeb035-7705-59ae-aad5-5217072fedf3", 00:19:53.163 "is_configured": true, 00:19:53.163 "data_offset": 0, 00:19:53.163 "data_size": 65536 00:19:53.163 }, 00:19:53.163 { 00:19:53.163 "name": "BaseBdev3", 00:19:53.163 "uuid": "d8c7774e-af88-55f8-b4d6-960e91eff70b", 00:19:53.163 "is_configured": true, 00:19:53.163 "data_offset": 0, 00:19:53.163 "data_size": 65536 00:19:53.163 }, 00:19:53.163 { 00:19:53.163 "name": "BaseBdev4", 00:19:53.163 "uuid": "55a654fe-907a-57a5-847d-1d3099eb1302", 00:19:53.163 "is_configured": true, 00:19:53.163 "data_offset": 0, 00:19:53.163 "data_size": 65536 00:19:53.163 } 00:19:53.163 ] 00:19:53.163 }' 00:19:53.163 13:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:53.163 13:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:53.163 13:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:53.163 13:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:53.163 13:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:53.163 13:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.163 13:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.163 [2024-12-06 13:14:40.067210] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:53.163 [2024-12-06 13:14:40.081196] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:19:53.163 13:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.163 13:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:53.163 [2024-12-06 13:14:40.083983] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:54.182 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:54.182 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:54.182 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:54.182 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:54.182 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:54.182 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.182 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.182 13:14:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.182 13:14:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.182 13:14:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.182 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:54.182 "name": "raid_bdev1", 00:19:54.182 "uuid": "92a52a65-7358-412c-b113-bd4a156cfe3e", 00:19:54.182 "strip_size_kb": 0, 00:19:54.182 "state": "online", 00:19:54.182 "raid_level": "raid1", 00:19:54.182 "superblock": false, 00:19:54.182 "num_base_bdevs": 4, 00:19:54.182 
"num_base_bdevs_discovered": 4, 00:19:54.182 "num_base_bdevs_operational": 4, 00:19:54.182 "process": { 00:19:54.182 "type": "rebuild", 00:19:54.182 "target": "spare", 00:19:54.182 "progress": { 00:19:54.182 "blocks": 20480, 00:19:54.182 "percent": 31 00:19:54.182 } 00:19:54.182 }, 00:19:54.182 "base_bdevs_list": [ 00:19:54.182 { 00:19:54.182 "name": "spare", 00:19:54.182 "uuid": "a1ec0fdf-931b-5d0f-8b96-af8f4a9579b5", 00:19:54.182 "is_configured": true, 00:19:54.182 "data_offset": 0, 00:19:54.182 "data_size": 65536 00:19:54.182 }, 00:19:54.182 { 00:19:54.182 "name": "BaseBdev2", 00:19:54.182 "uuid": "72eeb035-7705-59ae-aad5-5217072fedf3", 00:19:54.182 "is_configured": true, 00:19:54.182 "data_offset": 0, 00:19:54.182 "data_size": 65536 00:19:54.182 }, 00:19:54.182 { 00:19:54.182 "name": "BaseBdev3", 00:19:54.182 "uuid": "d8c7774e-af88-55f8-b4d6-960e91eff70b", 00:19:54.182 "is_configured": true, 00:19:54.182 "data_offset": 0, 00:19:54.182 "data_size": 65536 00:19:54.182 }, 00:19:54.182 { 00:19:54.182 "name": "BaseBdev4", 00:19:54.182 "uuid": "55a654fe-907a-57a5-847d-1d3099eb1302", 00:19:54.182 "is_configured": true, 00:19:54.182 "data_offset": 0, 00:19:54.182 "data_size": 65536 00:19:54.182 } 00:19:54.182 ] 00:19:54.182 }' 00:19:54.182 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:54.182 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:54.441 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:54.441 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:54.441 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:19:54.441 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:19:54.441 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:19:54.441 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:19:54.441 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:54.441 13:14:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.441 13:14:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.441 [2024-12-06 13:14:41.253855] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:54.441 [2024-12-06 13:14:41.295603] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:19:54.441 13:14:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.441 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:19:54.441 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:19:54.441 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:54.441 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:54.441 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:54.441 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:54.441 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:54.441 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.441 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.441 13:14:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.441 13:14:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.442 13:14:41 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.442 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:54.442 "name": "raid_bdev1", 00:19:54.442 "uuid": "92a52a65-7358-412c-b113-bd4a156cfe3e", 00:19:54.442 "strip_size_kb": 0, 00:19:54.442 "state": "online", 00:19:54.442 "raid_level": "raid1", 00:19:54.442 "superblock": false, 00:19:54.442 "num_base_bdevs": 4, 00:19:54.442 "num_base_bdevs_discovered": 3, 00:19:54.442 "num_base_bdevs_operational": 3, 00:19:54.442 "process": { 00:19:54.442 "type": "rebuild", 00:19:54.442 "target": "spare", 00:19:54.442 "progress": { 00:19:54.442 "blocks": 24576, 00:19:54.442 "percent": 37 00:19:54.442 } 00:19:54.442 }, 00:19:54.442 "base_bdevs_list": [ 00:19:54.442 { 00:19:54.442 "name": "spare", 00:19:54.442 "uuid": "a1ec0fdf-931b-5d0f-8b96-af8f4a9579b5", 00:19:54.442 "is_configured": true, 00:19:54.442 "data_offset": 0, 00:19:54.442 "data_size": 65536 00:19:54.442 }, 00:19:54.442 { 00:19:54.442 "name": null, 00:19:54.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.442 "is_configured": false, 00:19:54.442 "data_offset": 0, 00:19:54.442 "data_size": 65536 00:19:54.442 }, 00:19:54.442 { 00:19:54.442 "name": "BaseBdev3", 00:19:54.442 "uuid": "d8c7774e-af88-55f8-b4d6-960e91eff70b", 00:19:54.442 "is_configured": true, 00:19:54.442 "data_offset": 0, 00:19:54.442 "data_size": 65536 00:19:54.442 }, 00:19:54.442 { 00:19:54.442 "name": "BaseBdev4", 00:19:54.442 "uuid": "55a654fe-907a-57a5-847d-1d3099eb1302", 00:19:54.442 "is_configured": true, 00:19:54.442 "data_offset": 0, 00:19:54.442 "data_size": 65536 00:19:54.442 } 00:19:54.442 ] 00:19:54.442 }' 00:19:54.442 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:54.442 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:54.442 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:19:54.701 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:54.701 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=492 00:19:54.701 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:54.701 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:54.701 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:54.701 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:54.701 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:54.701 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:54.701 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.701 13:14:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.701 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.701 13:14:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.701 13:14:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.701 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:54.701 "name": "raid_bdev1", 00:19:54.701 "uuid": "92a52a65-7358-412c-b113-bd4a156cfe3e", 00:19:54.701 "strip_size_kb": 0, 00:19:54.701 "state": "online", 00:19:54.701 "raid_level": "raid1", 00:19:54.701 "superblock": false, 00:19:54.701 "num_base_bdevs": 4, 00:19:54.701 "num_base_bdevs_discovered": 3, 00:19:54.701 "num_base_bdevs_operational": 3, 00:19:54.701 "process": { 00:19:54.701 "type": "rebuild", 00:19:54.701 "target": "spare", 00:19:54.701 "progress": { 
00:19:54.701 "blocks": 26624, 00:19:54.701 "percent": 40 00:19:54.701 } 00:19:54.701 }, 00:19:54.701 "base_bdevs_list": [ 00:19:54.701 { 00:19:54.701 "name": "spare", 00:19:54.701 "uuid": "a1ec0fdf-931b-5d0f-8b96-af8f4a9579b5", 00:19:54.701 "is_configured": true, 00:19:54.701 "data_offset": 0, 00:19:54.701 "data_size": 65536 00:19:54.701 }, 00:19:54.701 { 00:19:54.701 "name": null, 00:19:54.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.701 "is_configured": false, 00:19:54.701 "data_offset": 0, 00:19:54.701 "data_size": 65536 00:19:54.701 }, 00:19:54.701 { 00:19:54.701 "name": "BaseBdev3", 00:19:54.701 "uuid": "d8c7774e-af88-55f8-b4d6-960e91eff70b", 00:19:54.701 "is_configured": true, 00:19:54.701 "data_offset": 0, 00:19:54.701 "data_size": 65536 00:19:54.701 }, 00:19:54.701 { 00:19:54.701 "name": "BaseBdev4", 00:19:54.701 "uuid": "55a654fe-907a-57a5-847d-1d3099eb1302", 00:19:54.701 "is_configured": true, 00:19:54.701 "data_offset": 0, 00:19:54.701 "data_size": 65536 00:19:54.701 } 00:19:54.701 ] 00:19:54.701 }' 00:19:54.701 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:54.701 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:54.701 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:54.701 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:54.701 13:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:55.680 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:55.680 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:55.680 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:55.680 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:19:55.680 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:55.680 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:55.680 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.680 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.680 13:14:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.680 13:14:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.680 13:14:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.680 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:55.680 "name": "raid_bdev1", 00:19:55.680 "uuid": "92a52a65-7358-412c-b113-bd4a156cfe3e", 00:19:55.680 "strip_size_kb": 0, 00:19:55.680 "state": "online", 00:19:55.680 "raid_level": "raid1", 00:19:55.680 "superblock": false, 00:19:55.680 "num_base_bdevs": 4, 00:19:55.680 "num_base_bdevs_discovered": 3, 00:19:55.680 "num_base_bdevs_operational": 3, 00:19:55.680 "process": { 00:19:55.680 "type": "rebuild", 00:19:55.680 "target": "spare", 00:19:55.680 "progress": { 00:19:55.680 "blocks": 51200, 00:19:55.680 "percent": 78 00:19:55.680 } 00:19:55.680 }, 00:19:55.680 "base_bdevs_list": [ 00:19:55.680 { 00:19:55.680 "name": "spare", 00:19:55.680 "uuid": "a1ec0fdf-931b-5d0f-8b96-af8f4a9579b5", 00:19:55.680 "is_configured": true, 00:19:55.680 "data_offset": 0, 00:19:55.680 "data_size": 65536 00:19:55.680 }, 00:19:55.680 { 00:19:55.680 "name": null, 00:19:55.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.680 "is_configured": false, 00:19:55.680 "data_offset": 0, 00:19:55.680 "data_size": 65536 00:19:55.680 }, 00:19:55.680 { 00:19:55.680 "name": "BaseBdev3", 00:19:55.680 "uuid": 
"d8c7774e-af88-55f8-b4d6-960e91eff70b", 00:19:55.680 "is_configured": true, 00:19:55.680 "data_offset": 0, 00:19:55.680 "data_size": 65536 00:19:55.680 }, 00:19:55.680 { 00:19:55.680 "name": "BaseBdev4", 00:19:55.680 "uuid": "55a654fe-907a-57a5-847d-1d3099eb1302", 00:19:55.680 "is_configured": true, 00:19:55.680 "data_offset": 0, 00:19:55.680 "data_size": 65536 00:19:55.680 } 00:19:55.680 ] 00:19:55.680 }' 00:19:55.680 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:55.939 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:55.939 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:55.939 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:55.939 13:14:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:56.506 [2024-12-06 13:14:43.315361] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:56.506 [2024-12-06 13:14:43.315761] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:56.506 [2024-12-06 13:14:43.315862] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:56.765 13:14:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:56.765 13:14:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:56.765 13:14:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:56.765 13:14:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:56.765 13:14:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:56.765 13:14:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:56.765 13:14:43 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.765 13:14:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.765 13:14:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.765 13:14:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.023 13:14:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.023 13:14:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:57.023 "name": "raid_bdev1", 00:19:57.023 "uuid": "92a52a65-7358-412c-b113-bd4a156cfe3e", 00:19:57.023 "strip_size_kb": 0, 00:19:57.023 "state": "online", 00:19:57.023 "raid_level": "raid1", 00:19:57.023 "superblock": false, 00:19:57.023 "num_base_bdevs": 4, 00:19:57.023 "num_base_bdevs_discovered": 3, 00:19:57.023 "num_base_bdevs_operational": 3, 00:19:57.023 "base_bdevs_list": [ 00:19:57.023 { 00:19:57.023 "name": "spare", 00:19:57.023 "uuid": "a1ec0fdf-931b-5d0f-8b96-af8f4a9579b5", 00:19:57.023 "is_configured": true, 00:19:57.023 "data_offset": 0, 00:19:57.023 "data_size": 65536 00:19:57.023 }, 00:19:57.023 { 00:19:57.023 "name": null, 00:19:57.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.023 "is_configured": false, 00:19:57.023 "data_offset": 0, 00:19:57.023 "data_size": 65536 00:19:57.023 }, 00:19:57.023 { 00:19:57.023 "name": "BaseBdev3", 00:19:57.023 "uuid": "d8c7774e-af88-55f8-b4d6-960e91eff70b", 00:19:57.023 "is_configured": true, 00:19:57.023 "data_offset": 0, 00:19:57.023 "data_size": 65536 00:19:57.023 }, 00:19:57.023 { 00:19:57.023 "name": "BaseBdev4", 00:19:57.023 "uuid": "55a654fe-907a-57a5-847d-1d3099eb1302", 00:19:57.023 "is_configured": true, 00:19:57.023 "data_offset": 0, 00:19:57.023 "data_size": 65536 00:19:57.023 } 00:19:57.023 ] 00:19:57.023 }' 00:19:57.023 13:14:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:19:57.023 13:14:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:57.023 13:14:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:57.023 13:14:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:57.023 13:14:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:19:57.023 13:14:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:57.023 13:14:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:57.023 13:14:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:57.023 13:14:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:57.023 13:14:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:57.023 13:14:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.023 13:14:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.023 13:14:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.023 13:14:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.024 13:14:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.024 13:14:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:57.024 "name": "raid_bdev1", 00:19:57.024 "uuid": "92a52a65-7358-412c-b113-bd4a156cfe3e", 00:19:57.024 "strip_size_kb": 0, 00:19:57.024 "state": "online", 00:19:57.024 "raid_level": "raid1", 00:19:57.024 "superblock": false, 00:19:57.024 "num_base_bdevs": 4, 00:19:57.024 "num_base_bdevs_discovered": 3, 00:19:57.024 "num_base_bdevs_operational": 3, 00:19:57.024 
"base_bdevs_list": [ 00:19:57.024 { 00:19:57.024 "name": "spare", 00:19:57.024 "uuid": "a1ec0fdf-931b-5d0f-8b96-af8f4a9579b5", 00:19:57.024 "is_configured": true, 00:19:57.024 "data_offset": 0, 00:19:57.024 "data_size": 65536 00:19:57.024 }, 00:19:57.024 { 00:19:57.024 "name": null, 00:19:57.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.024 "is_configured": false, 00:19:57.024 "data_offset": 0, 00:19:57.024 "data_size": 65536 00:19:57.024 }, 00:19:57.024 { 00:19:57.024 "name": "BaseBdev3", 00:19:57.024 "uuid": "d8c7774e-af88-55f8-b4d6-960e91eff70b", 00:19:57.024 "is_configured": true, 00:19:57.024 "data_offset": 0, 00:19:57.024 "data_size": 65536 00:19:57.024 }, 00:19:57.024 { 00:19:57.024 "name": "BaseBdev4", 00:19:57.024 "uuid": "55a654fe-907a-57a5-847d-1d3099eb1302", 00:19:57.024 "is_configured": true, 00:19:57.024 "data_offset": 0, 00:19:57.024 "data_size": 65536 00:19:57.024 } 00:19:57.024 ] 00:19:57.024 }' 00:19:57.024 13:14:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:57.024 13:14:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:57.282 13:14:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:57.282 13:14:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:57.282 13:14:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:57.282 13:14:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:57.282 13:14:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:57.282 13:14:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:57.282 13:14:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:57.282 13:14:44 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:57.282 13:14:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.282 13:14:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.282 13:14:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:57.282 13:14:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.282 13:14:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.282 13:14:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.282 13:14:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.282 13:14:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.282 13:14:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.282 13:14:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:57.282 "name": "raid_bdev1", 00:19:57.282 "uuid": "92a52a65-7358-412c-b113-bd4a156cfe3e", 00:19:57.282 "strip_size_kb": 0, 00:19:57.282 "state": "online", 00:19:57.282 "raid_level": "raid1", 00:19:57.282 "superblock": false, 00:19:57.282 "num_base_bdevs": 4, 00:19:57.282 "num_base_bdevs_discovered": 3, 00:19:57.282 "num_base_bdevs_operational": 3, 00:19:57.282 "base_bdevs_list": [ 00:19:57.282 { 00:19:57.282 "name": "spare", 00:19:57.282 "uuid": "a1ec0fdf-931b-5d0f-8b96-af8f4a9579b5", 00:19:57.282 "is_configured": true, 00:19:57.282 "data_offset": 0, 00:19:57.282 "data_size": 65536 00:19:57.282 }, 00:19:57.282 { 00:19:57.282 "name": null, 00:19:57.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.282 "is_configured": false, 00:19:57.283 "data_offset": 0, 00:19:57.283 "data_size": 65536 00:19:57.283 }, 00:19:57.283 { 00:19:57.283 "name": "BaseBdev3", 00:19:57.283 "uuid": 
"d8c7774e-af88-55f8-b4d6-960e91eff70b", 00:19:57.283 "is_configured": true, 00:19:57.283 "data_offset": 0, 00:19:57.283 "data_size": 65536 00:19:57.283 }, 00:19:57.283 { 00:19:57.283 "name": "BaseBdev4", 00:19:57.283 "uuid": "55a654fe-907a-57a5-847d-1d3099eb1302", 00:19:57.283 "is_configured": true, 00:19:57.283 "data_offset": 0, 00:19:57.283 "data_size": 65536 00:19:57.283 } 00:19:57.283 ] 00:19:57.283 }' 00:19:57.283 13:14:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.283 13:14:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.849 13:14:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:57.849 13:14:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.849 13:14:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.849 [2024-12-06 13:14:44.653986] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:57.849 [2024-12-06 13:14:44.654174] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:57.849 [2024-12-06 13:14:44.654325] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:57.849 [2024-12-06 13:14:44.654459] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:57.849 [2024-12-06 13:14:44.654507] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:57.849 13:14:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.849 13:14:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.849 13:14:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.849 13:14:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 
00:19:57.849 13:14:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:19:57.849 13:14:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:57.849 13:14:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:19:57.849 13:14:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:19:57.849 13:14:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']'
00:19:57.849 13:14:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:19:57.849 13:14:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:19:57.849 13:14:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:19:57.849 13:14:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list
00:19:57.849 13:14:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:19:57.849 13:14:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list
00:19:57.849 13:14:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i
00:19:57.849 13:14:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:19:57.849 13:14:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:19:57.849 13:14:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:19:58.107 /dev/nbd0
00:19:58.107 13:14:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:19:58.107 13:14:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:19:58.107 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:19:58.107 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i
00:19:58.107 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:19:58.107 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:19:58.107 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:19:58.107 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break
00:19:58.107 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:19:58.107 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:19:58.107 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:58.107 1+0 records in
00:19:58.107 1+0 records out
00:19:58.107 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412335 s, 9.9 MB/s
00:19:58.107 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:58.107 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096
00:19:58.107 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:58.107 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:19:58.107 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0
00:19:58.107 13:14:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:19:58.107 13:14:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:19:58.107 13:14:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1
00:19:58.364 /dev/nbd1
00:19:58.364 13:14:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:19:58.364 13:14:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:19:58.364 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:19:58.364 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i
00:19:58.364 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:19:58.364 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:19:58.364 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:19:58.364 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break
00:19:58.364 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:19:58.364 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:19:58.364 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:19:58.364 1+0 records in
00:19:58.364 1+0 records out
00:19:58.364 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000462735 s, 8.9 MB/s
00:19:58.364 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:58.364 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096
00:19:58.364 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:19:58.364 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:19:58.364 13:14:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0
00:19:58.364 13:14:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:19:58.364 13:14:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:19:58.364 13:14:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
00:19:58.621 13:14:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1'
00:19:58.621 13:14:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:19:58.621 13:14:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:19:58.621 13:14:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list
00:19:58.621 13:14:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i
00:19:58.621 13:14:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:58.621 13:14:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:19:58.898 13:14:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:19:58.898 13:14:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:19:58.898 13:14:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:19:58.898 13:14:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:58.898 13:14:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:58.898 13:14:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:19:58.898 13:14:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:19:58.898 13:14:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:19:58.898 13:14:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:19:58.898 13:14:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:19:59.463 13:14:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:19:59.463 13:14:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:19:59.463 13:14:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:19:59.463 13:14:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:19:59.463 13:14:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:19:59.463 13:14:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:19:59.463 13:14:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:19:59.463 13:14:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:19:59.464 13:14:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']'
00:19:59.464 13:14:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 78105
00:19:59.464 13:14:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 78105 ']'
00:19:59.464 13:14:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 78105
00:19:59.464 13:14:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname
00:19:59.464 13:14:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:59.464 13:14:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78105
00:19:59.464 13:14:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:59.464 killing process with pid 78105
13:14:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:59.464 13:14:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78105'
00:19:59.464 13:14:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 78105
00:19:59.464 Received shutdown signal, test time was about 60.000000 seconds
00:19:59.464
00:19:59.464 Latency(us)
00:19:59.464 [2024-12-06T13:14:46.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:59.464 [2024-12-06T13:14:46.480Z] ===================================================================================================================
00:19:59.464 [2024-12-06T13:14:46.480Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:19:59.464 [2024-12-06 13:14:46.251482] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:19:59.464 13:14:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 78105
00:19:59.754 [2024-12-06 13:14:46.727602] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0
00:20:01.145
00:20:01.145 real 0m21.520s
00:20:01.145 user 0m24.132s
00:20:01.145 sys 0m3.668s
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:20:01.145 ************************************
00:20:01.145 END TEST raid_rebuild_test
00:20:01.145 ************************************
00:20:01.145 13:14:47 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true
00:20:01.145 13:14:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:20:01.145 13:14:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:01.145 13:14:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:20:01.145 ************************************
00:20:01.145 START TEST raid_rebuild_test_sb
00:20:01.145 ************************************
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']'
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']'
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s'
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78585
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78585
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78585 ']'
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:01.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
13:14:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:01.145 13:14:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:01.145 [2024-12-06 13:14:48.045935] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization...
00:20:01.145 I/O size of 3145728 is greater than zero copy threshold (65536).
00:20:01.145 Zero copy mechanism will not be used.
00:20:01.145 [2024-12-06 13:14:48.046130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78585 ]
00:20:01.404 [2024-12-06 13:14:48.229410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:01.404 [2024-12-06 13:14:48.374121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:01.663 [2024-12-06 13:14:48.596755] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:20:01.663 [2024-12-06 13:14:48.596834] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:20:02.230 13:14:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:02.230 13:14:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0
00:20:02.230 13:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:20:02.230 13:14:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:20:02.230 13:14:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:02.230 13:14:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:02.230 BaseBdev1_malloc
13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:02.230 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:20:02.230 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:02.230 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:02.230 [2024-12-06 13:14:49.044383] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:20:02.230 [2024-12-06 13:14:49.044480] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:02.230 [2024-12-06 13:14:49.044518] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:20:02.230 [2024-12-06 13:14:49.044539] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:02.230 [2024-12-06 13:14:49.047573] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:02.230 [2024-12-06 13:14:49.047625] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:20:02.230 BaseBdev1
13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:02.230 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:20:02.230 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:20:02.230 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:02.230 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:02.230 BaseBdev2_malloc
13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:02.230 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:20:02.230 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:02.230 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:02.230 [2024-12-06 13:14:49.104181] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:20:02.230 [2024-12-06 13:14:49.104295] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:02.230 [2024-12-06 13:14:49.104332] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:20:02.230 [2024-12-06 13:14:49.104352] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:02.230 [2024-12-06 13:14:49.107560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:02.230 [2024-12-06 13:14:49.107615] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:20:02.230 BaseBdev2
13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:02.230 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:20:02.230 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:20:02.230 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:02.230 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:02.230 BaseBdev3_malloc
13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:02.230 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:20:02.230 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:02.230 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:02.230 [2024-12-06 13:14:49.178576] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:20:02.230 [2024-12-06 13:14:49.178821] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:02.230 [2024-12-06 13:14:49.178879] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:20:02.230 [2024-12-06 13:14:49.178901] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:02.230 [2024-12-06 13:14:49.181942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:02.230 [2024-12-06 13:14:49.182105] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:20:02.230 BaseBdev3
13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:02.230 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:20:02.230 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:20:02.230 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:02.231 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:02.231 BaseBdev4_malloc
13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:02.231 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:20:02.231 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:02.231 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:02.231 [2024-12-06 13:14:49.235007] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:20:02.231 [2024-12-06 13:14:49.235096] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:02.231 [2024-12-06 13:14:49.235132] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:20:02.231 [2024-12-06 13:14:49.235152] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:02.231 [2024-12-06 13:14:49.238144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:02.231 [2024-12-06 13:14:49.238198] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:20:02.231 BaseBdev4
13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:02.231 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:20:02.231 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:02.231 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:02.489 spare_malloc
13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:02.489 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:20:02.489 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:02.489 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:02.489 spare_delay
13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:02.489 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:20:02.489 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:02.489 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:02.489 [2024-12-06 13:14:49.303508] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:20:02.489 [2024-12-06 13:14:49.303588] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:02.489 [2024-12-06 13:14:49.303623] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:20:02.489 [2024-12-06 13:14:49.303642] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:02.489 [2024-12-06 13:14:49.306803] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:02.489 [2024-12-06 13:14:49.306861] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:20:02.489 spare
13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:02.489 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1
00:20:02.489 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:02.489 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:02.489 [2024-12-06 13:14:49.315618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:20:02.489 [2024-12-06 13:14:49.318274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:20:02.489 [2024-12-06 13:14:49.318368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:20:02.489 [2024-12-06 13:14:49.318452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:20:02.489 [2024-12-06 13:14:49.318766] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:20:02.489 [2024-12-06 13:14:49.318801] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:20:02.489 [2024-12-06 13:14:49.319162] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:20:02.489 [2024-12-06 13:14:49.319433] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:20:02.489 [2024-12-06 13:14:49.319451] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:20:02.489 [2024-12-06 13:14:49.319752] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:02.489 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:02.489 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:20:02.489 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:02.489 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:20:02.489 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:20:02.489 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:20:02.489 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:20:02.489 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:02.489 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:02.489 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:02.489 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:02.490 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:02.490 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:02.490 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:02.490 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:02.490 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:02.490 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:02.490 "name": "raid_bdev1",
00:20:02.490 "uuid": "d26f6d65-bd94-4fb4-9164-add074f3763e",
00:20:02.490 "strip_size_kb": 0,
00:20:02.490 "state": "online",
00:20:02.490 "raid_level": "raid1",
00:20:02.490 "superblock": true,
00:20:02.490 "num_base_bdevs": 4,
00:20:02.490 "num_base_bdevs_discovered": 4,
00:20:02.490 "num_base_bdevs_operational": 4,
00:20:02.490 "base_bdevs_list": [
00:20:02.490 {
00:20:02.490 "name": "BaseBdev1",
00:20:02.490 "uuid": "86f39e3e-d6e5-5fa9-bccc-2dbd3a81030c",
00:20:02.490 "is_configured": true,
00:20:02.490 "data_offset": 2048,
00:20:02.490 "data_size": 63488
00:20:02.490 },
00:20:02.490 {
00:20:02.490 "name": "BaseBdev2",
00:20:02.490 "uuid": "7c1488c4-fce2-58bd-89b0-a406720f71bd",
00:20:02.490 "is_configured": true,
00:20:02.490 "data_offset": 2048,
00:20:02.490 "data_size": 63488
00:20:02.490 },
00:20:02.490 {
00:20:02.490 "name": "BaseBdev3",
00:20:02.490 "uuid": "a02a92a6-809f-5eb5-84ac-c24ae7f679cd",
00:20:02.490 "is_configured": true,
00:20:02.490 "data_offset": 2048,
00:20:02.490 "data_size": 63488
00:20:02.490 },
00:20:02.490 {
00:20:02.490 "name": "BaseBdev4",
00:20:02.490 "uuid": "3dd99965-2f8d-59a0-b12d-3804bd5e6d01",
00:20:02.490 "is_configured": true,
00:20:02.490 "data_offset": 2048,
00:20:02.490 "data_size": 63488
00:20:02.490 }
00:20:02.490 ]
00:20:02.490 }'
13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:02.490 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:03.057 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:20:03.057 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:20:03.057 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:03.057 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:03.057 [2024-12-06 13:14:49.840324] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:20:03.057 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:03.057 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488
00:20:03.057 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:03.057 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:20:03.057 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:03.057 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:03.057 13:14:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:03.057 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048
00:20:03.057 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:20:03.057 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:20:03.057 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:20:03.057 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:20:03.057 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:20:03.057 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:20:03.057 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list
00:20:03.057 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:20:03.057 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list
00:20:03.057 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i
00:20:03.057 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:20:03.057 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:20:03.057 13:14:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:20:03.317 [2024-12-06 13:14:50.192048] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150
00:20:03.317 /dev/nbd0
00:20:03.317 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:20:03.317 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:20:03.317 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:20:03.317 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i
00:20:03.317 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:20:03.317 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:20:03.317 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:20:03.317 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break
00:20:03.317 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:20:03.317 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:20:03.317 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:20:03.317 1+0 records in
00:20:03.317 1+0 records out
00:20:03.317 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000498866 s, 8.2 MB/s
00:20:03.317 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:03.317 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096
00:20:03.317 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:03.317 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:20:03.317 13:14:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0
00:20:03.317 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:20:03.317 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:20:03.317 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']'
00:20:03.317 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1
00:20:03.317 13:14:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct
00:20:13.359 63488+0 records in
00:20:13.359 63488+0 records out
00:20:13.359 32505856 bytes (33 MB, 31 MiB) copied, 8.40508 s, 3.9 MB/s
00:20:13.359 13:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:20:13.359 13:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:20:13.359 13:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:20:13.359 13:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:20:13.359 13:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:20:13.359 13:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:20:13.359 13:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:20:13.359 [2024-12-06 13:14:58.982256] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:13.359 13:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:20:13.359 13:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:20:13.359 13:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:20:13.359 13:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:20:13.359 13:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:20:13.359 13:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:20:13.359 13:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:20:13.359 13:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:20:13.359 13:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:20:13.359 13:14:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:13.359 13:14:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:20:13.359 [2024-12-06 13:14:59.019533] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
13:14:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.359 13:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:13.359 13:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:13.359 13:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:13.359 13:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:13.359 13:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:13.359 13:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:13.359 13:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:13.359 13:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:13.360 13:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:13.360 13:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:13.360 13:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.360 13:14:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.360 13:14:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.360 13:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.360 13:14:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.360 13:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:13.360 "name": "raid_bdev1", 00:20:13.360 "uuid": "d26f6d65-bd94-4fb4-9164-add074f3763e", 00:20:13.360 "strip_size_kb": 0, 00:20:13.360 "state": 
"online", 00:20:13.360 "raid_level": "raid1", 00:20:13.360 "superblock": true, 00:20:13.360 "num_base_bdevs": 4, 00:20:13.360 "num_base_bdevs_discovered": 3, 00:20:13.360 "num_base_bdevs_operational": 3, 00:20:13.360 "base_bdevs_list": [ 00:20:13.360 { 00:20:13.360 "name": null, 00:20:13.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.360 "is_configured": false, 00:20:13.360 "data_offset": 0, 00:20:13.360 "data_size": 63488 00:20:13.360 }, 00:20:13.360 { 00:20:13.360 "name": "BaseBdev2", 00:20:13.360 "uuid": "7c1488c4-fce2-58bd-89b0-a406720f71bd", 00:20:13.360 "is_configured": true, 00:20:13.360 "data_offset": 2048, 00:20:13.360 "data_size": 63488 00:20:13.360 }, 00:20:13.360 { 00:20:13.360 "name": "BaseBdev3", 00:20:13.360 "uuid": "a02a92a6-809f-5eb5-84ac-c24ae7f679cd", 00:20:13.360 "is_configured": true, 00:20:13.360 "data_offset": 2048, 00:20:13.360 "data_size": 63488 00:20:13.360 }, 00:20:13.360 { 00:20:13.360 "name": "BaseBdev4", 00:20:13.360 "uuid": "3dd99965-2f8d-59a0-b12d-3804bd5e6d01", 00:20:13.360 "is_configured": true, 00:20:13.360 "data_offset": 2048, 00:20:13.360 "data_size": 63488 00:20:13.360 } 00:20:13.360 ] 00:20:13.360 }' 00:20:13.360 13:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:13.360 13:14:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.360 13:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:13.360 13:14:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.360 13:14:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.360 [2024-12-06 13:14:59.535684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:13.360 [2024-12-06 13:14:59.550769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:20:13.360 13:14:59 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.360 13:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:13.360 [2024-12-06 13:14:59.553599] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:13.618 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:13.618 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:13.618 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:13.618 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:13.618 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:13.618 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.618 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.618 13:15:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.618 13:15:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.618 13:15:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.618 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:13.618 "name": "raid_bdev1", 00:20:13.618 "uuid": "d26f6d65-bd94-4fb4-9164-add074f3763e", 00:20:13.618 "strip_size_kb": 0, 00:20:13.618 "state": "online", 00:20:13.618 "raid_level": "raid1", 00:20:13.618 "superblock": true, 00:20:13.618 "num_base_bdevs": 4, 00:20:13.618 "num_base_bdevs_discovered": 4, 00:20:13.618 "num_base_bdevs_operational": 4, 00:20:13.618 "process": { 00:20:13.618 "type": "rebuild", 00:20:13.618 "target": "spare", 00:20:13.618 "progress": { 00:20:13.618 "blocks": 20480, 
00:20:13.618 "percent": 32 00:20:13.618 } 00:20:13.618 }, 00:20:13.618 "base_bdevs_list": [ 00:20:13.618 { 00:20:13.618 "name": "spare", 00:20:13.618 "uuid": "4583d0f2-c4c5-5ebc-92fa-f2e693c49633", 00:20:13.618 "is_configured": true, 00:20:13.618 "data_offset": 2048, 00:20:13.618 "data_size": 63488 00:20:13.618 }, 00:20:13.618 { 00:20:13.618 "name": "BaseBdev2", 00:20:13.618 "uuid": "7c1488c4-fce2-58bd-89b0-a406720f71bd", 00:20:13.618 "is_configured": true, 00:20:13.618 "data_offset": 2048, 00:20:13.618 "data_size": 63488 00:20:13.618 }, 00:20:13.618 { 00:20:13.618 "name": "BaseBdev3", 00:20:13.618 "uuid": "a02a92a6-809f-5eb5-84ac-c24ae7f679cd", 00:20:13.618 "is_configured": true, 00:20:13.618 "data_offset": 2048, 00:20:13.618 "data_size": 63488 00:20:13.618 }, 00:20:13.618 { 00:20:13.618 "name": "BaseBdev4", 00:20:13.618 "uuid": "3dd99965-2f8d-59a0-b12d-3804bd5e6d01", 00:20:13.618 "is_configured": true, 00:20:13.618 "data_offset": 2048, 00:20:13.618 "data_size": 63488 00:20:13.618 } 00:20:13.618 ] 00:20:13.618 }' 00:20:13.618 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:13.876 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:13.876 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:13.876 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:13.876 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:13.876 13:15:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.876 13:15:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.876 [2024-12-06 13:15:00.731636] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:13.876 [2024-12-06 13:15:00.765399] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:13.876 [2024-12-06 13:15:00.765541] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:13.876 [2024-12-06 13:15:00.765574] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:13.876 [2024-12-06 13:15:00.765592] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:13.876 13:15:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.876 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:13.876 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:13.876 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:13.877 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:13.877 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:13.877 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:13.877 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:13.877 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:13.877 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:13.877 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:13.877 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.877 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.877 13:15:00 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.877 13:15:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.877 13:15:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.877 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:13.877 "name": "raid_bdev1", 00:20:13.877 "uuid": "d26f6d65-bd94-4fb4-9164-add074f3763e", 00:20:13.877 "strip_size_kb": 0, 00:20:13.877 "state": "online", 00:20:13.877 "raid_level": "raid1", 00:20:13.877 "superblock": true, 00:20:13.877 "num_base_bdevs": 4, 00:20:13.877 "num_base_bdevs_discovered": 3, 00:20:13.877 "num_base_bdevs_operational": 3, 00:20:13.877 "base_bdevs_list": [ 00:20:13.877 { 00:20:13.877 "name": null, 00:20:13.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.877 "is_configured": false, 00:20:13.877 "data_offset": 0, 00:20:13.877 "data_size": 63488 00:20:13.877 }, 00:20:13.877 { 00:20:13.877 "name": "BaseBdev2", 00:20:13.877 "uuid": "7c1488c4-fce2-58bd-89b0-a406720f71bd", 00:20:13.877 "is_configured": true, 00:20:13.877 "data_offset": 2048, 00:20:13.877 "data_size": 63488 00:20:13.877 }, 00:20:13.877 { 00:20:13.877 "name": "BaseBdev3", 00:20:13.877 "uuid": "a02a92a6-809f-5eb5-84ac-c24ae7f679cd", 00:20:13.877 "is_configured": true, 00:20:13.877 "data_offset": 2048, 00:20:13.877 "data_size": 63488 00:20:13.877 }, 00:20:13.877 { 00:20:13.877 "name": "BaseBdev4", 00:20:13.877 "uuid": "3dd99965-2f8d-59a0-b12d-3804bd5e6d01", 00:20:13.877 "is_configured": true, 00:20:13.877 "data_offset": 2048, 00:20:13.877 "data_size": 63488 00:20:13.877 } 00:20:13.877 ] 00:20:13.877 }' 00:20:13.877 13:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:13.877 13:15:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.442 13:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 
00:20:14.442 13:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:14.442 13:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:14.442 13:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:14.442 13:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:14.442 13:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.442 13:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.442 13:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.442 13:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.442 13:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.442 13:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:14.442 "name": "raid_bdev1", 00:20:14.442 "uuid": "d26f6d65-bd94-4fb4-9164-add074f3763e", 00:20:14.442 "strip_size_kb": 0, 00:20:14.442 "state": "online", 00:20:14.442 "raid_level": "raid1", 00:20:14.442 "superblock": true, 00:20:14.442 "num_base_bdevs": 4, 00:20:14.442 "num_base_bdevs_discovered": 3, 00:20:14.442 "num_base_bdevs_operational": 3, 00:20:14.442 "base_bdevs_list": [ 00:20:14.442 { 00:20:14.442 "name": null, 00:20:14.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.442 "is_configured": false, 00:20:14.442 "data_offset": 0, 00:20:14.442 "data_size": 63488 00:20:14.442 }, 00:20:14.442 { 00:20:14.442 "name": "BaseBdev2", 00:20:14.442 "uuid": "7c1488c4-fce2-58bd-89b0-a406720f71bd", 00:20:14.442 "is_configured": true, 00:20:14.442 "data_offset": 2048, 00:20:14.442 "data_size": 63488 00:20:14.442 }, 00:20:14.442 { 00:20:14.442 "name": "BaseBdev3", 00:20:14.442 "uuid": 
"a02a92a6-809f-5eb5-84ac-c24ae7f679cd", 00:20:14.442 "is_configured": true, 00:20:14.442 "data_offset": 2048, 00:20:14.442 "data_size": 63488 00:20:14.442 }, 00:20:14.442 { 00:20:14.442 "name": "BaseBdev4", 00:20:14.442 "uuid": "3dd99965-2f8d-59a0-b12d-3804bd5e6d01", 00:20:14.442 "is_configured": true, 00:20:14.442 "data_offset": 2048, 00:20:14.442 "data_size": 63488 00:20:14.442 } 00:20:14.442 ] 00:20:14.442 }' 00:20:14.442 13:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:14.442 13:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:14.442 13:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:14.700 13:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:14.700 13:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:14.700 13:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.700 13:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.700 [2024-12-06 13:15:01.481534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:14.700 [2024-12-06 13:15:01.495865] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:20:14.700 13:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.700 13:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:14.700 [2024-12-06 13:15:01.498800] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:15.633 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:15.633 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:20:15.633 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:15.633 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:15.633 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:15.633 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.633 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.633 13:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.633 13:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.633 13:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.633 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:15.633 "name": "raid_bdev1", 00:20:15.633 "uuid": "d26f6d65-bd94-4fb4-9164-add074f3763e", 00:20:15.633 "strip_size_kb": 0, 00:20:15.633 "state": "online", 00:20:15.633 "raid_level": "raid1", 00:20:15.633 "superblock": true, 00:20:15.633 "num_base_bdevs": 4, 00:20:15.633 "num_base_bdevs_discovered": 4, 00:20:15.633 "num_base_bdevs_operational": 4, 00:20:15.633 "process": { 00:20:15.633 "type": "rebuild", 00:20:15.634 "target": "spare", 00:20:15.634 "progress": { 00:20:15.634 "blocks": 20480, 00:20:15.634 "percent": 32 00:20:15.634 } 00:20:15.634 }, 00:20:15.634 "base_bdevs_list": [ 00:20:15.634 { 00:20:15.634 "name": "spare", 00:20:15.634 "uuid": "4583d0f2-c4c5-5ebc-92fa-f2e693c49633", 00:20:15.634 "is_configured": true, 00:20:15.634 "data_offset": 2048, 00:20:15.634 "data_size": 63488 00:20:15.634 }, 00:20:15.634 { 00:20:15.634 "name": "BaseBdev2", 00:20:15.634 "uuid": "7c1488c4-fce2-58bd-89b0-a406720f71bd", 00:20:15.634 "is_configured": true, 00:20:15.634 "data_offset": 2048, 
00:20:15.634 "data_size": 63488 00:20:15.634 }, 00:20:15.634 { 00:20:15.634 "name": "BaseBdev3", 00:20:15.634 "uuid": "a02a92a6-809f-5eb5-84ac-c24ae7f679cd", 00:20:15.634 "is_configured": true, 00:20:15.634 "data_offset": 2048, 00:20:15.634 "data_size": 63488 00:20:15.634 }, 00:20:15.634 { 00:20:15.634 "name": "BaseBdev4", 00:20:15.634 "uuid": "3dd99965-2f8d-59a0-b12d-3804bd5e6d01", 00:20:15.634 "is_configured": true, 00:20:15.634 "data_offset": 2048, 00:20:15.634 "data_size": 63488 00:20:15.634 } 00:20:15.634 ] 00:20:15.634 }' 00:20:15.634 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:15.634 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:15.634 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:15.892 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:15.892 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:15.892 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:15.892 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:15.892 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:20:15.892 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:15.892 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:20:15.892 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:15.892 13:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.892 13:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.892 [2024-12-06 13:15:02.688907] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:15.892 [2024-12-06 13:15:02.810750] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:20:15.892 13:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.892 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:20:15.892 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:20:15.892 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:15.892 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:15.892 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:15.892 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:15.892 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:15.892 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.892 13:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.892 13:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.892 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.892 13:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.892 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:15.892 "name": "raid_bdev1", 00:20:15.892 "uuid": "d26f6d65-bd94-4fb4-9164-add074f3763e", 00:20:15.892 "strip_size_kb": 0, 00:20:15.892 "state": "online", 00:20:15.892 "raid_level": "raid1", 00:20:15.892 "superblock": true, 00:20:15.892 "num_base_bdevs": 4, 
00:20:15.892 "num_base_bdevs_discovered": 3, 00:20:15.892 "num_base_bdevs_operational": 3, 00:20:15.892 "process": { 00:20:15.892 "type": "rebuild", 00:20:15.892 "target": "spare", 00:20:15.892 "progress": { 00:20:15.892 "blocks": 24576, 00:20:15.892 "percent": 38 00:20:15.892 } 00:20:15.892 }, 00:20:15.892 "base_bdevs_list": [ 00:20:15.892 { 00:20:15.892 "name": "spare", 00:20:15.892 "uuid": "4583d0f2-c4c5-5ebc-92fa-f2e693c49633", 00:20:15.892 "is_configured": true, 00:20:15.892 "data_offset": 2048, 00:20:15.892 "data_size": 63488 00:20:15.892 }, 00:20:15.892 { 00:20:15.892 "name": null, 00:20:15.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.892 "is_configured": false, 00:20:15.892 "data_offset": 0, 00:20:15.892 "data_size": 63488 00:20:15.892 }, 00:20:15.892 { 00:20:15.892 "name": "BaseBdev3", 00:20:15.892 "uuid": "a02a92a6-809f-5eb5-84ac-c24ae7f679cd", 00:20:15.892 "is_configured": true, 00:20:15.892 "data_offset": 2048, 00:20:15.892 "data_size": 63488 00:20:15.892 }, 00:20:15.892 { 00:20:15.892 "name": "BaseBdev4", 00:20:15.892 "uuid": "3dd99965-2f8d-59a0-b12d-3804bd5e6d01", 00:20:15.892 "is_configured": true, 00:20:15.892 "data_offset": 2048, 00:20:15.892 "data_size": 63488 00:20:15.892 } 00:20:15.892 ] 00:20:15.892 }' 00:20:15.892 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:16.150 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:16.150 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:16.150 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:16.150 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=513 00:20:16.150 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:16.150 13:15:02 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:16.150 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:16.150 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:16.150 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:16.150 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:16.150 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.150 13:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.150 13:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.150 13:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.151 13:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.151 13:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:16.151 "name": "raid_bdev1", 00:20:16.151 "uuid": "d26f6d65-bd94-4fb4-9164-add074f3763e", 00:20:16.151 "strip_size_kb": 0, 00:20:16.151 "state": "online", 00:20:16.151 "raid_level": "raid1", 00:20:16.151 "superblock": true, 00:20:16.151 "num_base_bdevs": 4, 00:20:16.151 "num_base_bdevs_discovered": 3, 00:20:16.151 "num_base_bdevs_operational": 3, 00:20:16.151 "process": { 00:20:16.151 "type": "rebuild", 00:20:16.151 "target": "spare", 00:20:16.151 "progress": { 00:20:16.151 "blocks": 26624, 00:20:16.151 "percent": 41 00:20:16.151 } 00:20:16.151 }, 00:20:16.151 "base_bdevs_list": [ 00:20:16.151 { 00:20:16.151 "name": "spare", 00:20:16.151 "uuid": "4583d0f2-c4c5-5ebc-92fa-f2e693c49633", 00:20:16.151 "is_configured": true, 00:20:16.151 "data_offset": 2048, 00:20:16.151 "data_size": 63488 00:20:16.151 }, 00:20:16.151 { 
00:20:16.151 "name": null, 00:20:16.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.151 "is_configured": false, 00:20:16.151 "data_offset": 0, 00:20:16.151 "data_size": 63488 00:20:16.151 }, 00:20:16.151 { 00:20:16.151 "name": "BaseBdev3", 00:20:16.151 "uuid": "a02a92a6-809f-5eb5-84ac-c24ae7f679cd", 00:20:16.151 "is_configured": true, 00:20:16.151 "data_offset": 2048, 00:20:16.151 "data_size": 63488 00:20:16.151 }, 00:20:16.151 { 00:20:16.151 "name": "BaseBdev4", 00:20:16.151 "uuid": "3dd99965-2f8d-59a0-b12d-3804bd5e6d01", 00:20:16.151 "is_configured": true, 00:20:16.151 "data_offset": 2048, 00:20:16.151 "data_size": 63488 00:20:16.151 } 00:20:16.151 ] 00:20:16.151 }' 00:20:16.151 13:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:16.151 13:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:16.151 13:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:16.151 13:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:16.151 13:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:17.524 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:17.524 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:17.524 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:17.524 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:17.524 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:17.524 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:17.524 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:17.524 13:15:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.524 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.524 13:15:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.524 13:15:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.524 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:17.524 "name": "raid_bdev1", 00:20:17.524 "uuid": "d26f6d65-bd94-4fb4-9164-add074f3763e", 00:20:17.524 "strip_size_kb": 0, 00:20:17.524 "state": "online", 00:20:17.524 "raid_level": "raid1", 00:20:17.524 "superblock": true, 00:20:17.524 "num_base_bdevs": 4, 00:20:17.524 "num_base_bdevs_discovered": 3, 00:20:17.524 "num_base_bdevs_operational": 3, 00:20:17.524 "process": { 00:20:17.524 "type": "rebuild", 00:20:17.524 "target": "spare", 00:20:17.524 "progress": { 00:20:17.524 "blocks": 51200, 00:20:17.524 "percent": 80 00:20:17.524 } 00:20:17.524 }, 00:20:17.524 "base_bdevs_list": [ 00:20:17.524 { 00:20:17.524 "name": "spare", 00:20:17.524 "uuid": "4583d0f2-c4c5-5ebc-92fa-f2e693c49633", 00:20:17.524 "is_configured": true, 00:20:17.524 "data_offset": 2048, 00:20:17.524 "data_size": 63488 00:20:17.524 }, 00:20:17.524 { 00:20:17.524 "name": null, 00:20:17.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.524 "is_configured": false, 00:20:17.524 "data_offset": 0, 00:20:17.524 "data_size": 63488 00:20:17.524 }, 00:20:17.524 { 00:20:17.524 "name": "BaseBdev3", 00:20:17.524 "uuid": "a02a92a6-809f-5eb5-84ac-c24ae7f679cd", 00:20:17.524 "is_configured": true, 00:20:17.524 "data_offset": 2048, 00:20:17.524 "data_size": 63488 00:20:17.524 }, 00:20:17.524 { 00:20:17.524 "name": "BaseBdev4", 00:20:17.524 "uuid": "3dd99965-2f8d-59a0-b12d-3804bd5e6d01", 00:20:17.524 "is_configured": true, 00:20:17.524 "data_offset": 
2048, 00:20:17.524 "data_size": 63488 00:20:17.524 } 00:20:17.524 ] 00:20:17.524 }' 00:20:17.524 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:17.524 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:17.525 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:17.525 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:17.525 13:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:17.785 [2024-12-06 13:15:04.729339] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:17.785 [2024-12-06 13:15:04.729535] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:17.785 [2024-12-06 13:15:04.729751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:18.351 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:18.351 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:18.351 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:18.351 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:18.351 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:18.351 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:18.351 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.351 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.351 13:15:05 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.351 13:15:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.351 13:15:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.351 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:18.351 "name": "raid_bdev1", 00:20:18.351 "uuid": "d26f6d65-bd94-4fb4-9164-add074f3763e", 00:20:18.351 "strip_size_kb": 0, 00:20:18.351 "state": "online", 00:20:18.351 "raid_level": "raid1", 00:20:18.351 "superblock": true, 00:20:18.351 "num_base_bdevs": 4, 00:20:18.351 "num_base_bdevs_discovered": 3, 00:20:18.351 "num_base_bdevs_operational": 3, 00:20:18.351 "base_bdevs_list": [ 00:20:18.351 { 00:20:18.351 "name": "spare", 00:20:18.351 "uuid": "4583d0f2-c4c5-5ebc-92fa-f2e693c49633", 00:20:18.351 "is_configured": true, 00:20:18.351 "data_offset": 2048, 00:20:18.351 "data_size": 63488 00:20:18.351 }, 00:20:18.351 { 00:20:18.351 "name": null, 00:20:18.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.351 "is_configured": false, 00:20:18.351 "data_offset": 0, 00:20:18.351 "data_size": 63488 00:20:18.351 }, 00:20:18.351 { 00:20:18.351 "name": "BaseBdev3", 00:20:18.351 "uuid": "a02a92a6-809f-5eb5-84ac-c24ae7f679cd", 00:20:18.351 "is_configured": true, 00:20:18.351 "data_offset": 2048, 00:20:18.351 "data_size": 63488 00:20:18.351 }, 00:20:18.351 { 00:20:18.351 "name": "BaseBdev4", 00:20:18.351 "uuid": "3dd99965-2f8d-59a0-b12d-3804bd5e6d01", 00:20:18.351 "is_configured": true, 00:20:18.351 "data_offset": 2048, 00:20:18.351 "data_size": 63488 00:20:18.351 } 00:20:18.351 ] 00:20:18.351 }' 00:20:18.351 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:18.608 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:18.608 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:20:18.608 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:18.608 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:20:18.608 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:18.608 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:18.608 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:18.608 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:18.608 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:18.608 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.608 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.608 13:15:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.608 13:15:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.608 13:15:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.608 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:18.608 "name": "raid_bdev1", 00:20:18.608 "uuid": "d26f6d65-bd94-4fb4-9164-add074f3763e", 00:20:18.608 "strip_size_kb": 0, 00:20:18.608 "state": "online", 00:20:18.608 "raid_level": "raid1", 00:20:18.608 "superblock": true, 00:20:18.608 "num_base_bdevs": 4, 00:20:18.608 "num_base_bdevs_discovered": 3, 00:20:18.608 "num_base_bdevs_operational": 3, 00:20:18.608 "base_bdevs_list": [ 00:20:18.608 { 00:20:18.608 "name": "spare", 00:20:18.608 "uuid": "4583d0f2-c4c5-5ebc-92fa-f2e693c49633", 00:20:18.608 "is_configured": true, 00:20:18.608 "data_offset": 2048, 
00:20:18.608 "data_size": 63488 00:20:18.608 }, 00:20:18.608 { 00:20:18.608 "name": null, 00:20:18.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.608 "is_configured": false, 00:20:18.608 "data_offset": 0, 00:20:18.608 "data_size": 63488 00:20:18.608 }, 00:20:18.608 { 00:20:18.608 "name": "BaseBdev3", 00:20:18.609 "uuid": "a02a92a6-809f-5eb5-84ac-c24ae7f679cd", 00:20:18.609 "is_configured": true, 00:20:18.609 "data_offset": 2048, 00:20:18.609 "data_size": 63488 00:20:18.609 }, 00:20:18.609 { 00:20:18.609 "name": "BaseBdev4", 00:20:18.609 "uuid": "3dd99965-2f8d-59a0-b12d-3804bd5e6d01", 00:20:18.609 "is_configured": true, 00:20:18.609 "data_offset": 2048, 00:20:18.609 "data_size": 63488 00:20:18.609 } 00:20:18.609 ] 00:20:18.609 }' 00:20:18.609 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:18.609 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:18.609 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:18.866 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:18.866 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:18.866 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:18.866 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:18.866 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:18.866 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:18.866 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:18.866 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:18.866 
13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:18.866 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:18.866 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:18.866 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.866 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.866 13:15:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.866 13:15:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.866 13:15:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.866 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:18.866 "name": "raid_bdev1", 00:20:18.866 "uuid": "d26f6d65-bd94-4fb4-9164-add074f3763e", 00:20:18.866 "strip_size_kb": 0, 00:20:18.866 "state": "online", 00:20:18.867 "raid_level": "raid1", 00:20:18.867 "superblock": true, 00:20:18.867 "num_base_bdevs": 4, 00:20:18.867 "num_base_bdevs_discovered": 3, 00:20:18.867 "num_base_bdevs_operational": 3, 00:20:18.867 "base_bdevs_list": [ 00:20:18.867 { 00:20:18.867 "name": "spare", 00:20:18.867 "uuid": "4583d0f2-c4c5-5ebc-92fa-f2e693c49633", 00:20:18.867 "is_configured": true, 00:20:18.867 "data_offset": 2048, 00:20:18.867 "data_size": 63488 00:20:18.867 }, 00:20:18.867 { 00:20:18.867 "name": null, 00:20:18.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.867 "is_configured": false, 00:20:18.867 "data_offset": 0, 00:20:18.867 "data_size": 63488 00:20:18.867 }, 00:20:18.867 { 00:20:18.867 "name": "BaseBdev3", 00:20:18.867 "uuid": "a02a92a6-809f-5eb5-84ac-c24ae7f679cd", 00:20:18.867 "is_configured": true, 00:20:18.867 "data_offset": 2048, 00:20:18.867 "data_size": 63488 
00:20:18.867 }, 00:20:18.867 { 00:20:18.867 "name": "BaseBdev4", 00:20:18.867 "uuid": "3dd99965-2f8d-59a0-b12d-3804bd5e6d01", 00:20:18.867 "is_configured": true, 00:20:18.867 "data_offset": 2048, 00:20:18.867 "data_size": 63488 00:20:18.867 } 00:20:18.867 ] 00:20:18.867 }' 00:20:18.867 13:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:18.867 13:15:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.444 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:19.444 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.444 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.444 [2024-12-06 13:15:06.198618] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:19.444 [2024-12-06 13:15:06.198836] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:19.444 [2024-12-06 13:15:06.198988] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:19.444 [2024-12-06 13:15:06.199140] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:19.444 [2024-12-06 13:15:06.199174] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:19.444 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.444 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.444 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.444 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.444 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:20:19.444 
13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.444 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:19.444 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:19.444 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:19.444 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:19.444 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:19.444 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:19.444 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:19.444 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:19.444 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:19.444 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:19.444 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:19.444 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:19.444 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:19.702 /dev/nbd0 00:20:19.702 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:19.702 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:19.702 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:19.702 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:20:19.702 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:19.702 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:19.702 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:19.702 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:20:19.703 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:19.703 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:19.703 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:19.703 1+0 records in 00:20:19.703 1+0 records out 00:20:19.703 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439462 s, 9.3 MB/s 00:20:19.703 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:19.703 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:20:19.703 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:19.703 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:19.703 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:20:19.703 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:19.703 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:19.703 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:19.961 /dev/nbd1 00:20:19.961 13:15:06 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:19.961 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:19.961 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:19.961 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:20:19.961 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:19.961 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:19.961 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:19.961 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:20:19.961 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:19.961 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:19.961 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:19.961 1+0 records in 00:20:19.961 1+0 records out 00:20:19.961 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000539701 s, 7.6 MB/s 00:20:19.961 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:19.961 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:20:19.961 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:19.961 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:19.961 13:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:20:19.961 13:15:06 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:19.961 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:19.961 13:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:20.220 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:20.220 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:20.220 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:20.220 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:20.220 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:20:20.220 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:20.220 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:20.787 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:20.787 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:20.787 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:20.787 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:20.787 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:20.787 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:20.787 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:20.787 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:20.787 13:15:07 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:20.787 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:21.045 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:21.045 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:21.045 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:21.045 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:21.045 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:21.045 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:21.045 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:21.045 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:21.045 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:21.045 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:21.045 13:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.045 13:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.045 13:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.045 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:21.045 13:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.045 13:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.045 [2024-12-06 13:15:07.837707] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:20:21.045 [2024-12-06 13:15:07.837782] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:21.045 [2024-12-06 13:15:07.837819] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:20:21.046 [2024-12-06 13:15:07.837836] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:21.046 [2024-12-06 13:15:07.841102] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:21.046 [2024-12-06 13:15:07.841149] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:21.046 [2024-12-06 13:15:07.841310] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:21.046 [2024-12-06 13:15:07.841387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:21.046 [2024-12-06 13:15:07.841602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:21.046 [2024-12-06 13:15:07.841762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:21.046 spare 00:20:21.046 13:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.046 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:21.046 13:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.046 13:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.046 [2024-12-06 13:15:07.941972] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:21.046 [2024-12-06 13:15:07.942055] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:21.046 [2024-12-06 13:15:07.942648] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:20:21.046 [2024-12-06 13:15:07.943023] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:21.046 [2024-12-06 13:15:07.943057] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:21.046 [2024-12-06 13:15:07.943325] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:21.046 13:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.046 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:21.046 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:21.046 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:21.046 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:21.046 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:21.046 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:21.046 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:21.046 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:21.046 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:21.046 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:21.046 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.046 13:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.046 13:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.046 13:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:20:21.046 13:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.046 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:21.046 "name": "raid_bdev1", 00:20:21.046 "uuid": "d26f6d65-bd94-4fb4-9164-add074f3763e", 00:20:21.046 "strip_size_kb": 0, 00:20:21.046 "state": "online", 00:20:21.046 "raid_level": "raid1", 00:20:21.046 "superblock": true, 00:20:21.046 "num_base_bdevs": 4, 00:20:21.046 "num_base_bdevs_discovered": 3, 00:20:21.046 "num_base_bdevs_operational": 3, 00:20:21.046 "base_bdevs_list": [ 00:20:21.046 { 00:20:21.046 "name": "spare", 00:20:21.046 "uuid": "4583d0f2-c4c5-5ebc-92fa-f2e693c49633", 00:20:21.046 "is_configured": true, 00:20:21.046 "data_offset": 2048, 00:20:21.046 "data_size": 63488 00:20:21.046 }, 00:20:21.046 { 00:20:21.046 "name": null, 00:20:21.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.046 "is_configured": false, 00:20:21.046 "data_offset": 2048, 00:20:21.046 "data_size": 63488 00:20:21.046 }, 00:20:21.046 { 00:20:21.046 "name": "BaseBdev3", 00:20:21.046 "uuid": "a02a92a6-809f-5eb5-84ac-c24ae7f679cd", 00:20:21.046 "is_configured": true, 00:20:21.046 "data_offset": 2048, 00:20:21.046 "data_size": 63488 00:20:21.046 }, 00:20:21.046 { 00:20:21.046 "name": "BaseBdev4", 00:20:21.046 "uuid": "3dd99965-2f8d-59a0-b12d-3804bd5e6d01", 00:20:21.046 "is_configured": true, 00:20:21.046 "data_offset": 2048, 00:20:21.046 "data_size": 63488 00:20:21.046 } 00:20:21.046 ] 00:20:21.046 }' 00:20:21.046 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:21.046 13:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.613 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:21.613 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:21.613 
13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:21.613 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:21.613 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:21.613 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.613 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.613 13:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.613 13:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.613 13:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.613 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:21.613 "name": "raid_bdev1", 00:20:21.613 "uuid": "d26f6d65-bd94-4fb4-9164-add074f3763e", 00:20:21.613 "strip_size_kb": 0, 00:20:21.613 "state": "online", 00:20:21.613 "raid_level": "raid1", 00:20:21.613 "superblock": true, 00:20:21.613 "num_base_bdevs": 4, 00:20:21.613 "num_base_bdevs_discovered": 3, 00:20:21.613 "num_base_bdevs_operational": 3, 00:20:21.613 "base_bdevs_list": [ 00:20:21.613 { 00:20:21.613 "name": "spare", 00:20:21.613 "uuid": "4583d0f2-c4c5-5ebc-92fa-f2e693c49633", 00:20:21.613 "is_configured": true, 00:20:21.613 "data_offset": 2048, 00:20:21.613 "data_size": 63488 00:20:21.613 }, 00:20:21.613 { 00:20:21.613 "name": null, 00:20:21.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.613 "is_configured": false, 00:20:21.613 "data_offset": 2048, 00:20:21.613 "data_size": 63488 00:20:21.613 }, 00:20:21.613 { 00:20:21.613 "name": "BaseBdev3", 00:20:21.613 "uuid": "a02a92a6-809f-5eb5-84ac-c24ae7f679cd", 00:20:21.613 "is_configured": true, 00:20:21.613 "data_offset": 2048, 00:20:21.613 "data_size": 63488 
00:20:21.613 }, 00:20:21.613 { 00:20:21.613 "name": "BaseBdev4", 00:20:21.613 "uuid": "3dd99965-2f8d-59a0-b12d-3804bd5e6d01", 00:20:21.613 "is_configured": true, 00:20:21.613 "data_offset": 2048, 00:20:21.613 "data_size": 63488 00:20:21.613 } 00:20:21.613 ] 00:20:21.613 }' 00:20:21.613 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:21.613 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:21.613 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:21.613 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:21.613 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.613 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:21.613 13:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.613 13:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.872 13:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.872 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:21.872 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:21.872 13:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.872 13:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.872 [2024-12-06 13:15:08.670343] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:21.872 13:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.872 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:21.872 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:21.872 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:21.872 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:21.872 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:21.872 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:21.872 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:21.872 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:21.872 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:21.872 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:21.872 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.872 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.872 13:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.872 13:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.872 13:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.872 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:21.872 "name": "raid_bdev1", 00:20:21.872 "uuid": "d26f6d65-bd94-4fb4-9164-add074f3763e", 00:20:21.872 "strip_size_kb": 0, 00:20:21.872 "state": "online", 00:20:21.872 "raid_level": "raid1", 00:20:21.872 "superblock": true, 00:20:21.872 "num_base_bdevs": 4, 00:20:21.872 "num_base_bdevs_discovered": 2, 00:20:21.872 
"num_base_bdevs_operational": 2, 00:20:21.872 "base_bdevs_list": [ 00:20:21.872 { 00:20:21.872 "name": null, 00:20:21.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.872 "is_configured": false, 00:20:21.872 "data_offset": 0, 00:20:21.872 "data_size": 63488 00:20:21.872 }, 00:20:21.872 { 00:20:21.872 "name": null, 00:20:21.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.872 "is_configured": false, 00:20:21.872 "data_offset": 2048, 00:20:21.872 "data_size": 63488 00:20:21.872 }, 00:20:21.872 { 00:20:21.872 "name": "BaseBdev3", 00:20:21.872 "uuid": "a02a92a6-809f-5eb5-84ac-c24ae7f679cd", 00:20:21.872 "is_configured": true, 00:20:21.872 "data_offset": 2048, 00:20:21.872 "data_size": 63488 00:20:21.872 }, 00:20:21.872 { 00:20:21.872 "name": "BaseBdev4", 00:20:21.872 "uuid": "3dd99965-2f8d-59a0-b12d-3804bd5e6d01", 00:20:21.872 "is_configured": true, 00:20:21.872 "data_offset": 2048, 00:20:21.872 "data_size": 63488 00:20:21.872 } 00:20:21.872 ] 00:20:21.872 }' 00:20:21.872 13:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:21.872 13:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.439 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:22.439 13:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.439 13:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.439 [2024-12-06 13:15:09.182490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:22.439 [2024-12-06 13:15:09.182871] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:20:22.439 [2024-12-06 13:15:09.182900] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:22.439 [2024-12-06 13:15:09.182950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:22.439 [2024-12-06 13:15:09.197274] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:20:22.439 13:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.439 13:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:22.439 [2024-12-06 13:15:09.200069] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:23.373 13:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:23.373 13:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:23.373 13:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:23.373 13:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:23.373 13:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:23.373 13:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.374 13:15:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.374 13:15:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.374 13:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.374 13:15:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.374 13:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:23.374 "name": "raid_bdev1", 00:20:23.374 "uuid": "d26f6d65-bd94-4fb4-9164-add074f3763e", 00:20:23.374 "strip_size_kb": 0, 00:20:23.374 "state": "online", 00:20:23.374 "raid_level": "raid1", 
00:20:23.374 "superblock": true, 00:20:23.374 "num_base_bdevs": 4, 00:20:23.374 "num_base_bdevs_discovered": 3, 00:20:23.374 "num_base_bdevs_operational": 3, 00:20:23.374 "process": { 00:20:23.374 "type": "rebuild", 00:20:23.374 "target": "spare", 00:20:23.374 "progress": { 00:20:23.374 "blocks": 20480, 00:20:23.374 "percent": 32 00:20:23.374 } 00:20:23.374 }, 00:20:23.374 "base_bdevs_list": [ 00:20:23.374 { 00:20:23.374 "name": "spare", 00:20:23.374 "uuid": "4583d0f2-c4c5-5ebc-92fa-f2e693c49633", 00:20:23.374 "is_configured": true, 00:20:23.374 "data_offset": 2048, 00:20:23.374 "data_size": 63488 00:20:23.374 }, 00:20:23.374 { 00:20:23.374 "name": null, 00:20:23.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.374 "is_configured": false, 00:20:23.374 "data_offset": 2048, 00:20:23.374 "data_size": 63488 00:20:23.374 }, 00:20:23.374 { 00:20:23.374 "name": "BaseBdev3", 00:20:23.374 "uuid": "a02a92a6-809f-5eb5-84ac-c24ae7f679cd", 00:20:23.374 "is_configured": true, 00:20:23.374 "data_offset": 2048, 00:20:23.374 "data_size": 63488 00:20:23.374 }, 00:20:23.374 { 00:20:23.374 "name": "BaseBdev4", 00:20:23.374 "uuid": "3dd99965-2f8d-59a0-b12d-3804bd5e6d01", 00:20:23.374 "is_configured": true, 00:20:23.374 "data_offset": 2048, 00:20:23.374 "data_size": 63488 00:20:23.374 } 00:20:23.374 ] 00:20:23.374 }' 00:20:23.374 13:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:23.374 13:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:23.374 13:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:23.374 13:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:23.374 13:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:23.374 13:15:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:23.374 13:15:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.633 [2024-12-06 13:15:10.390255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:23.633 [2024-12-06 13:15:10.411949] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:23.633 [2024-12-06 13:15:10.412054] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:23.633 [2024-12-06 13:15:10.412087] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:23.633 [2024-12-06 13:15:10.412100] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:23.633 13:15:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.633 13:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:23.633 13:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:23.633 13:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:23.633 13:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:23.633 13:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:23.633 13:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:23.633 13:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:23.633 13:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:23.633 13:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:23.633 13:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:23.633 13:15:10 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.633 13:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.633 13:15:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.633 13:15:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.633 13:15:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.634 13:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:23.634 "name": "raid_bdev1", 00:20:23.634 "uuid": "d26f6d65-bd94-4fb4-9164-add074f3763e", 00:20:23.634 "strip_size_kb": 0, 00:20:23.634 "state": "online", 00:20:23.634 "raid_level": "raid1", 00:20:23.634 "superblock": true, 00:20:23.634 "num_base_bdevs": 4, 00:20:23.634 "num_base_bdevs_discovered": 2, 00:20:23.634 "num_base_bdevs_operational": 2, 00:20:23.634 "base_bdevs_list": [ 00:20:23.634 { 00:20:23.634 "name": null, 00:20:23.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.634 "is_configured": false, 00:20:23.634 "data_offset": 0, 00:20:23.634 "data_size": 63488 00:20:23.634 }, 00:20:23.634 { 00:20:23.634 "name": null, 00:20:23.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.634 "is_configured": false, 00:20:23.634 "data_offset": 2048, 00:20:23.634 "data_size": 63488 00:20:23.634 }, 00:20:23.634 { 00:20:23.634 "name": "BaseBdev3", 00:20:23.634 "uuid": "a02a92a6-809f-5eb5-84ac-c24ae7f679cd", 00:20:23.634 "is_configured": true, 00:20:23.634 "data_offset": 2048, 00:20:23.634 "data_size": 63488 00:20:23.634 }, 00:20:23.634 { 00:20:23.634 "name": "BaseBdev4", 00:20:23.634 "uuid": "3dd99965-2f8d-59a0-b12d-3804bd5e6d01", 00:20:23.634 "is_configured": true, 00:20:23.634 "data_offset": 2048, 00:20:23.634 "data_size": 63488 00:20:23.634 } 00:20:23.634 ] 00:20:23.634 }' 00:20:23.634 13:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:20:23.634 13:15:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.208 13:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:24.208 13:15:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.208 13:15:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.208 [2024-12-06 13:15:11.005654] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:24.208 [2024-12-06 13:15:11.005745] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.208 [2024-12-06 13:15:11.005794] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:20:24.208 [2024-12-06 13:15:11.005812] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.208 [2024-12-06 13:15:11.006579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.208 [2024-12-06 13:15:11.006612] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:24.208 [2024-12-06 13:15:11.006754] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:24.208 [2024-12-06 13:15:11.006775] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:20:24.208 [2024-12-06 13:15:11.006811] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:24.208 [2024-12-06 13:15:11.006847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:24.208 [2024-12-06 13:15:11.020469] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:20:24.208 spare 00:20:24.208 13:15:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.208 13:15:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:24.208 [2024-12-06 13:15:11.023542] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:25.139 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:25.139 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:25.139 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:25.139 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:25.139 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:25.139 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.139 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.139 13:15:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.139 13:15:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.139 13:15:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.139 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:25.139 "name": "raid_bdev1", 00:20:25.139 "uuid": "d26f6d65-bd94-4fb4-9164-add074f3763e", 00:20:25.139 "strip_size_kb": 0, 00:20:25.139 "state": "online", 00:20:25.139 
"raid_level": "raid1", 00:20:25.139 "superblock": true, 00:20:25.139 "num_base_bdevs": 4, 00:20:25.139 "num_base_bdevs_discovered": 3, 00:20:25.139 "num_base_bdevs_operational": 3, 00:20:25.139 "process": { 00:20:25.139 "type": "rebuild", 00:20:25.139 "target": "spare", 00:20:25.139 "progress": { 00:20:25.139 "blocks": 20480, 00:20:25.139 "percent": 32 00:20:25.139 } 00:20:25.139 }, 00:20:25.139 "base_bdevs_list": [ 00:20:25.139 { 00:20:25.139 "name": "spare", 00:20:25.139 "uuid": "4583d0f2-c4c5-5ebc-92fa-f2e693c49633", 00:20:25.139 "is_configured": true, 00:20:25.139 "data_offset": 2048, 00:20:25.139 "data_size": 63488 00:20:25.139 }, 00:20:25.139 { 00:20:25.139 "name": null, 00:20:25.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.139 "is_configured": false, 00:20:25.139 "data_offset": 2048, 00:20:25.139 "data_size": 63488 00:20:25.139 }, 00:20:25.139 { 00:20:25.139 "name": "BaseBdev3", 00:20:25.139 "uuid": "a02a92a6-809f-5eb5-84ac-c24ae7f679cd", 00:20:25.139 "is_configured": true, 00:20:25.140 "data_offset": 2048, 00:20:25.140 "data_size": 63488 00:20:25.140 }, 00:20:25.140 { 00:20:25.140 "name": "BaseBdev4", 00:20:25.140 "uuid": "3dd99965-2f8d-59a0-b12d-3804bd5e6d01", 00:20:25.140 "is_configured": true, 00:20:25.140 "data_offset": 2048, 00:20:25.140 "data_size": 63488 00:20:25.140 } 00:20:25.140 ] 00:20:25.140 }' 00:20:25.140 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:25.140 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:25.140 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:25.395 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:25.395 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:25.395 13:15:12 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.395 13:15:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.395 [2024-12-06 13:15:12.193648] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:25.395 [2024-12-06 13:15:12.235360] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:25.395 [2024-12-06 13:15:12.235455] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:25.395 [2024-12-06 13:15:12.235496] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:25.395 [2024-12-06 13:15:12.235531] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:25.395 13:15:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.395 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:25.395 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:25.395 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:25.395 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:25.395 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:25.395 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:25.395 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:25.395 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:25.395 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:25.395 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:25.395 
13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.395 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.395 13:15:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.395 13:15:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.395 13:15:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.395 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:25.395 "name": "raid_bdev1", 00:20:25.395 "uuid": "d26f6d65-bd94-4fb4-9164-add074f3763e", 00:20:25.395 "strip_size_kb": 0, 00:20:25.395 "state": "online", 00:20:25.395 "raid_level": "raid1", 00:20:25.395 "superblock": true, 00:20:25.395 "num_base_bdevs": 4, 00:20:25.395 "num_base_bdevs_discovered": 2, 00:20:25.395 "num_base_bdevs_operational": 2, 00:20:25.395 "base_bdevs_list": [ 00:20:25.395 { 00:20:25.395 "name": null, 00:20:25.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.395 "is_configured": false, 00:20:25.395 "data_offset": 0, 00:20:25.395 "data_size": 63488 00:20:25.395 }, 00:20:25.395 { 00:20:25.395 "name": null, 00:20:25.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.395 "is_configured": false, 00:20:25.395 "data_offset": 2048, 00:20:25.395 "data_size": 63488 00:20:25.395 }, 00:20:25.395 { 00:20:25.395 "name": "BaseBdev3", 00:20:25.395 "uuid": "a02a92a6-809f-5eb5-84ac-c24ae7f679cd", 00:20:25.395 "is_configured": true, 00:20:25.395 "data_offset": 2048, 00:20:25.395 "data_size": 63488 00:20:25.395 }, 00:20:25.395 { 00:20:25.395 "name": "BaseBdev4", 00:20:25.395 "uuid": "3dd99965-2f8d-59a0-b12d-3804bd5e6d01", 00:20:25.395 "is_configured": true, 00:20:25.395 "data_offset": 2048, 00:20:25.395 "data_size": 63488 00:20:25.395 } 00:20:25.395 ] 00:20:25.395 }' 00:20:25.395 13:15:12 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:25.395 13:15:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.958 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:25.958 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:25.958 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:25.958 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:25.958 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:25.958 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.958 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.958 13:15:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.958 13:15:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.958 13:15:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.958 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:25.958 "name": "raid_bdev1", 00:20:25.958 "uuid": "d26f6d65-bd94-4fb4-9164-add074f3763e", 00:20:25.958 "strip_size_kb": 0, 00:20:25.958 "state": "online", 00:20:25.958 "raid_level": "raid1", 00:20:25.958 "superblock": true, 00:20:25.958 "num_base_bdevs": 4, 00:20:25.958 "num_base_bdevs_discovered": 2, 00:20:25.958 "num_base_bdevs_operational": 2, 00:20:25.958 "base_bdevs_list": [ 00:20:25.958 { 00:20:25.958 "name": null, 00:20:25.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.958 "is_configured": false, 00:20:25.958 "data_offset": 0, 00:20:25.958 "data_size": 63488 00:20:25.958 }, 00:20:25.958 
{ 00:20:25.958 "name": null, 00:20:25.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.958 "is_configured": false, 00:20:25.958 "data_offset": 2048, 00:20:25.958 "data_size": 63488 00:20:25.958 }, 00:20:25.958 { 00:20:25.958 "name": "BaseBdev3", 00:20:25.958 "uuid": "a02a92a6-809f-5eb5-84ac-c24ae7f679cd", 00:20:25.958 "is_configured": true, 00:20:25.958 "data_offset": 2048, 00:20:25.958 "data_size": 63488 00:20:25.958 }, 00:20:25.958 { 00:20:25.958 "name": "BaseBdev4", 00:20:25.958 "uuid": "3dd99965-2f8d-59a0-b12d-3804bd5e6d01", 00:20:25.958 "is_configured": true, 00:20:25.958 "data_offset": 2048, 00:20:25.958 "data_size": 63488 00:20:25.958 } 00:20:25.958 ] 00:20:25.958 }' 00:20:25.958 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:25.958 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:25.958 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:25.958 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:25.958 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:25.958 13:15:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.958 13:15:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.958 13:15:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.958 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:25.959 13:15:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.959 13:15:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.959 [2024-12-06 13:15:12.945130] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:25.959 [2024-12-06 13:15:12.945218] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:25.959 [2024-12-06 13:15:12.945250] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:20:25.959 [2024-12-06 13:15:12.945269] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:25.959 [2024-12-06 13:15:12.945936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:25.959 [2024-12-06 13:15:12.945975] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:25.959 [2024-12-06 13:15:12.946087] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:25.959 [2024-12-06 13:15:12.946122] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:20:25.959 [2024-12-06 13:15:12.946135] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:25.959 [2024-12-06 13:15:12.946168] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:25.959 BaseBdev1 00:20:25.959 13:15:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.959 13:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:27.326 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:27.326 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:27.326 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:27.326 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:27.326 13:15:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:27.326 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:27.326 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:27.326 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:27.326 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:27.326 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:27.326 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.326 13:15:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.326 13:15:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.326 13:15:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.326 13:15:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.326 13:15:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:27.326 "name": "raid_bdev1", 00:20:27.326 "uuid": "d26f6d65-bd94-4fb4-9164-add074f3763e", 00:20:27.326 "strip_size_kb": 0, 00:20:27.326 "state": "online", 00:20:27.326 "raid_level": "raid1", 00:20:27.326 "superblock": true, 00:20:27.326 "num_base_bdevs": 4, 00:20:27.326 "num_base_bdevs_discovered": 2, 00:20:27.326 "num_base_bdevs_operational": 2, 00:20:27.326 "base_bdevs_list": [ 00:20:27.326 { 00:20:27.326 "name": null, 00:20:27.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.326 "is_configured": false, 00:20:27.326 "data_offset": 0, 00:20:27.326 "data_size": 63488 00:20:27.326 }, 00:20:27.326 { 00:20:27.326 "name": null, 00:20:27.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.326 
"is_configured": false, 00:20:27.326 "data_offset": 2048, 00:20:27.326 "data_size": 63488 00:20:27.326 }, 00:20:27.326 { 00:20:27.326 "name": "BaseBdev3", 00:20:27.326 "uuid": "a02a92a6-809f-5eb5-84ac-c24ae7f679cd", 00:20:27.326 "is_configured": true, 00:20:27.326 "data_offset": 2048, 00:20:27.326 "data_size": 63488 00:20:27.326 }, 00:20:27.326 { 00:20:27.326 "name": "BaseBdev4", 00:20:27.326 "uuid": "3dd99965-2f8d-59a0-b12d-3804bd5e6d01", 00:20:27.326 "is_configured": true, 00:20:27.326 "data_offset": 2048, 00:20:27.326 "data_size": 63488 00:20:27.326 } 00:20:27.326 ] 00:20:27.326 }' 00:20:27.326 13:15:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:27.326 13:15:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.583 13:15:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:27.583 13:15:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:27.583 13:15:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:27.583 13:15:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:27.583 13:15:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:27.583 13:15:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.583 13:15:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.583 13:15:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.583 13:15:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.583 13:15:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.583 13:15:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:20:27.583 "name": "raid_bdev1", 00:20:27.583 "uuid": "d26f6d65-bd94-4fb4-9164-add074f3763e", 00:20:27.583 "strip_size_kb": 0, 00:20:27.583 "state": "online", 00:20:27.583 "raid_level": "raid1", 00:20:27.583 "superblock": true, 00:20:27.583 "num_base_bdevs": 4, 00:20:27.583 "num_base_bdevs_discovered": 2, 00:20:27.583 "num_base_bdevs_operational": 2, 00:20:27.583 "base_bdevs_list": [ 00:20:27.583 { 00:20:27.583 "name": null, 00:20:27.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.583 "is_configured": false, 00:20:27.583 "data_offset": 0, 00:20:27.583 "data_size": 63488 00:20:27.583 }, 00:20:27.583 { 00:20:27.583 "name": null, 00:20:27.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.583 "is_configured": false, 00:20:27.583 "data_offset": 2048, 00:20:27.583 "data_size": 63488 00:20:27.583 }, 00:20:27.583 { 00:20:27.583 "name": "BaseBdev3", 00:20:27.583 "uuid": "a02a92a6-809f-5eb5-84ac-c24ae7f679cd", 00:20:27.583 "is_configured": true, 00:20:27.583 "data_offset": 2048, 00:20:27.583 "data_size": 63488 00:20:27.583 }, 00:20:27.583 { 00:20:27.583 "name": "BaseBdev4", 00:20:27.583 "uuid": "3dd99965-2f8d-59a0-b12d-3804bd5e6d01", 00:20:27.583 "is_configured": true, 00:20:27.583 "data_offset": 2048, 00:20:27.583 "data_size": 63488 00:20:27.583 } 00:20:27.583 ] 00:20:27.583 }' 00:20:27.583 13:15:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:27.583 13:15:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:27.583 13:15:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:27.840 13:15:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:27.840 13:15:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:27.840 13:15:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:20:27.840 13:15:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:27.841 13:15:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:27.841 13:15:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:27.841 13:15:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:27.841 13:15:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:27.841 13:15:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:27.841 13:15:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.841 13:15:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.841 [2024-12-06 13:15:14.633807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:27.841 [2024-12-06 13:15:14.634194] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:20:27.841 [2024-12-06 13:15:14.634234] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:27.841 request: 00:20:27.841 { 00:20:27.841 "base_bdev": "BaseBdev1", 00:20:27.841 "raid_bdev": "raid_bdev1", 00:20:27.841 "method": "bdev_raid_add_base_bdev", 00:20:27.841 "req_id": 1 00:20:27.841 } 00:20:27.841 Got JSON-RPC error response 00:20:27.841 response: 00:20:27.841 { 00:20:27.841 "code": -22, 00:20:27.841 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:27.841 } 00:20:27.841 13:15:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:27.841 13:15:14 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:20:27.841 13:15:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:27.841 13:15:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:27.841 13:15:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:27.841 13:15:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:28.855 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:28.855 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:28.855 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:28.855 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:28.855 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:28.855 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:28.855 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:28.855 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:28.855 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:28.855 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:28.855 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.855 13:15:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.855 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.855 13:15:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:20:28.855 13:15:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.855 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:28.855 "name": "raid_bdev1", 00:20:28.855 "uuid": "d26f6d65-bd94-4fb4-9164-add074f3763e", 00:20:28.855 "strip_size_kb": 0, 00:20:28.855 "state": "online", 00:20:28.855 "raid_level": "raid1", 00:20:28.855 "superblock": true, 00:20:28.855 "num_base_bdevs": 4, 00:20:28.855 "num_base_bdevs_discovered": 2, 00:20:28.855 "num_base_bdevs_operational": 2, 00:20:28.855 "base_bdevs_list": [ 00:20:28.855 { 00:20:28.855 "name": null, 00:20:28.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.855 "is_configured": false, 00:20:28.855 "data_offset": 0, 00:20:28.855 "data_size": 63488 00:20:28.855 }, 00:20:28.855 { 00:20:28.855 "name": null, 00:20:28.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.855 "is_configured": false, 00:20:28.855 "data_offset": 2048, 00:20:28.855 "data_size": 63488 00:20:28.855 }, 00:20:28.855 { 00:20:28.855 "name": "BaseBdev3", 00:20:28.855 "uuid": "a02a92a6-809f-5eb5-84ac-c24ae7f679cd", 00:20:28.855 "is_configured": true, 00:20:28.855 "data_offset": 2048, 00:20:28.855 "data_size": 63488 00:20:28.855 }, 00:20:28.855 { 00:20:28.855 "name": "BaseBdev4", 00:20:28.855 "uuid": "3dd99965-2f8d-59a0-b12d-3804bd5e6d01", 00:20:28.855 "is_configured": true, 00:20:28.855 "data_offset": 2048, 00:20:28.855 "data_size": 63488 00:20:28.855 } 00:20:28.855 ] 00:20:28.855 }' 00:20:28.855 13:15:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:28.855 13:15:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.420 13:15:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:29.420 13:15:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:29.420 13:15:16 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:29.420 13:15:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:29.420 13:15:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:29.420 13:15:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.420 13:15:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.420 13:15:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.420 13:15:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.420 13:15:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.420 13:15:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:29.420 "name": "raid_bdev1", 00:20:29.420 "uuid": "d26f6d65-bd94-4fb4-9164-add074f3763e", 00:20:29.420 "strip_size_kb": 0, 00:20:29.420 "state": "online", 00:20:29.420 "raid_level": "raid1", 00:20:29.420 "superblock": true, 00:20:29.420 "num_base_bdevs": 4, 00:20:29.420 "num_base_bdevs_discovered": 2, 00:20:29.420 "num_base_bdevs_operational": 2, 00:20:29.420 "base_bdevs_list": [ 00:20:29.420 { 00:20:29.420 "name": null, 00:20:29.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.420 "is_configured": false, 00:20:29.420 "data_offset": 0, 00:20:29.420 "data_size": 63488 00:20:29.420 }, 00:20:29.420 { 00:20:29.420 "name": null, 00:20:29.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.420 "is_configured": false, 00:20:29.420 "data_offset": 2048, 00:20:29.420 "data_size": 63488 00:20:29.420 }, 00:20:29.420 { 00:20:29.420 "name": "BaseBdev3", 00:20:29.420 "uuid": "a02a92a6-809f-5eb5-84ac-c24ae7f679cd", 00:20:29.420 "is_configured": true, 00:20:29.420 "data_offset": 2048, 00:20:29.420 "data_size": 63488 00:20:29.420 }, 
00:20:29.420 { 00:20:29.420 "name": "BaseBdev4", 00:20:29.420 "uuid": "3dd99965-2f8d-59a0-b12d-3804bd5e6d01", 00:20:29.420 "is_configured": true, 00:20:29.420 "data_offset": 2048, 00:20:29.420 "data_size": 63488 00:20:29.420 } 00:20:29.420 ] 00:20:29.420 }' 00:20:29.420 13:15:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:29.420 13:15:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:29.420 13:15:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:29.420 13:15:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:29.420 13:15:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78585 00:20:29.420 13:15:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78585 ']' 00:20:29.420 13:15:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78585 00:20:29.420 13:15:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:20:29.420 13:15:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:29.420 13:15:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78585 00:20:29.420 13:15:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:29.420 13:15:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:29.420 killing process with pid 78585 00:20:29.420 13:15:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78585' 00:20:29.420 13:15:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78585 00:20:29.420 Received shutdown signal, test time was about 60.000000 seconds 00:20:29.420 00:20:29.420 Latency(us) 00:20:29.420 
[2024-12-06T13:15:16.436Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.420 [2024-12-06T13:15:16.436Z] =================================================================================================================== 00:20:29.420 [2024-12-06T13:15:16.436Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:29.420 [2024-12-06 13:15:16.364648] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:29.420 13:15:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78585 00:20:29.420 [2024-12-06 13:15:16.364818] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:29.420 [2024-12-06 13:15:16.364925] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:29.420 [2024-12-06 13:15:16.364955] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:29.987 [2024-12-06 13:15:16.841229] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:31.362 13:15:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:20:31.362 00:20:31.362 real 0m30.066s 00:20:31.362 user 0m36.560s 00:20:31.362 sys 0m4.214s 00:20:31.362 13:15:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:31.362 ************************************ 00:20:31.362 END TEST raid_rebuild_test_sb 00:20:31.362 ************************************ 00:20:31.362 13:15:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.362 13:15:18 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:20:31.362 13:15:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:31.362 13:15:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:31.362 13:15:18 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:20:31.362 ************************************ 00:20:31.362 START TEST raid_rebuild_test_io 00:20:31.362 ************************************ 00:20:31.362 13:15:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:20:31.362 13:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:31.362 13:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:20:31.362 13:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:20:31.362 13:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:20:31.362 13:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:31.362 13:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:31.362 13:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:31.362 13:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:31.362 13:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:31.362 13:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:31.362 13:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:31.362 13:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:31.362 13:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:31.362 13:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:31.362 13:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:31.362 13:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:31.362 13:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:20:31.363 13:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:31.363 13:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:31.363 13:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:31.363 13:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:31.363 13:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:31.363 13:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:31.363 13:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:31.363 13:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:31.363 13:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:31.363 13:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:31.363 13:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:31.363 13:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:20:31.363 13:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79391 00:20:31.363 13:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79391 00:20:31.363 13:15:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 79391 ']' 00:20:31.363 13:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:31.363 13:15:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:31.363 13:15:18 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:20:31.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:31.363 13:15:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:31.363 13:15:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:31.363 13:15:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:31.363 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:31.363 Zero copy mechanism will not be used. 00:20:31.363 [2024-12-06 13:15:18.175244] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:20:31.363 [2024-12-06 13:15:18.175461] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79391 ] 00:20:31.363 [2024-12-06 13:15:18.366216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.620 [2024-12-06 13:15:18.512299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.878 [2024-12-06 13:15:18.723913] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:31.878 [2024-12-06 13:15:18.724013] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:32.444 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:32.444 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:20:32.444 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:32.445 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:20:32.445 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.445 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:32.445 BaseBdev1_malloc 00:20:32.445 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.445 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:32.445 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.445 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:32.445 [2024-12-06 13:15:19.273663] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:32.445 [2024-12-06 13:15:19.273770] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:32.445 [2024-12-06 13:15:19.273806] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:32.445 [2024-12-06 13:15:19.273842] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:32.445 [2024-12-06 13:15:19.277013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:32.445 [2024-12-06 13:15:19.277091] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:32.445 BaseBdev1 00:20:32.445 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.445 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:32.445 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:32.445 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.445 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:20:32.445 BaseBdev2_malloc 00:20:32.445 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.445 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:32.445 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.445 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:32.445 [2024-12-06 13:15:19.328221] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:32.445 [2024-12-06 13:15:19.328316] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:32.445 [2024-12-06 13:15:19.328348] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:32.445 [2024-12-06 13:15:19.328367] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:32.445 [2024-12-06 13:15:19.331441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:32.445 [2024-12-06 13:15:19.331558] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:32.445 BaseBdev2 00:20:32.445 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.445 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:32.445 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:32.445 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.445 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:32.445 BaseBdev3_malloc 00:20:32.445 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.445 13:15:19 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:32.445 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.445 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:32.445 [2024-12-06 13:15:19.396257] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:32.445 [2024-12-06 13:15:19.396341] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:32.445 [2024-12-06 13:15:19.396376] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:32.445 [2024-12-06 13:15:19.396396] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:32.445 [2024-12-06 13:15:19.399417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:32.445 [2024-12-06 13:15:19.399476] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:32.445 BaseBdev3 00:20:32.445 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.445 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:32.445 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:32.445 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.445 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:32.445 BaseBdev4_malloc 00:20:32.445 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.445 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:32.445 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:32.445 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:32.445 [2024-12-06 13:15:19.453583] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:32.445 [2024-12-06 13:15:19.453673] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:32.445 [2024-12-06 13:15:19.453704] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:32.445 [2024-12-06 13:15:19.453722] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:32.445 [2024-12-06 13:15:19.456584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:32.445 [2024-12-06 13:15:19.456645] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:32.445 BaseBdev4 00:20:32.704 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.704 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:32.704 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.704 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:32.704 spare_malloc 00:20:32.704 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.704 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:32.704 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.704 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:32.704 spare_delay 00:20:32.704 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.704 13:15:19 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:32.704 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.704 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:32.704 [2024-12-06 13:15:19.525353] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:32.704 [2024-12-06 13:15:19.525472] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:32.704 [2024-12-06 13:15:19.525540] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:32.704 [2024-12-06 13:15:19.525561] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:32.704 [2024-12-06 13:15:19.528711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:32.704 [2024-12-06 13:15:19.528790] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:32.704 spare 00:20:32.704 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.704 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:20:32.704 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.704 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:32.704 [2024-12-06 13:15:19.533666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:32.704 [2024-12-06 13:15:19.536414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:32.704 [2024-12-06 13:15:19.536507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:32.704 [2024-12-06 13:15:19.536607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:20:32.705 [2024-12-06 13:15:19.536733] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:32.705 [2024-12-06 13:15:19.536756] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:32.705 [2024-12-06 13:15:19.537084] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:32.705 [2024-12-06 13:15:19.537323] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:32.705 [2024-12-06 13:15:19.537344] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:32.705 [2024-12-06 13:15:19.537608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:32.705 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.705 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:32.705 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:32.705 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:32.705 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:32.705 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:32.705 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:32.705 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:32.705 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:32.705 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:32.705 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:20:32.705 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.705 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.705 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.705 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:32.705 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.705 13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:32.705 "name": "raid_bdev1", 00:20:32.705 "uuid": "2ec6ae5c-2393-44a9-8142-3bc317e2286a", 00:20:32.705 "strip_size_kb": 0, 00:20:32.705 "state": "online", 00:20:32.705 "raid_level": "raid1", 00:20:32.705 "superblock": false, 00:20:32.705 "num_base_bdevs": 4, 00:20:32.705 "num_base_bdevs_discovered": 4, 00:20:32.705 "num_base_bdevs_operational": 4, 00:20:32.705 "base_bdevs_list": [ 00:20:32.705 { 00:20:32.705 "name": "BaseBdev1", 00:20:32.705 "uuid": "9ea1c5a0-7e8e-516b-b341-d9ffb21ba6de", 00:20:32.705 "is_configured": true, 00:20:32.705 "data_offset": 0, 00:20:32.705 "data_size": 65536 00:20:32.705 }, 00:20:32.705 { 00:20:32.705 "name": "BaseBdev2", 00:20:32.705 "uuid": "97abe3d1-9f5a-5ea0-9b9c-01b07dfc1289", 00:20:32.705 "is_configured": true, 00:20:32.705 "data_offset": 0, 00:20:32.705 "data_size": 65536 00:20:32.705 }, 00:20:32.705 { 00:20:32.705 "name": "BaseBdev3", 00:20:32.705 "uuid": "b04ad229-412d-546d-aca6-67eff53fd8c9", 00:20:32.705 "is_configured": true, 00:20:32.705 "data_offset": 0, 00:20:32.705 "data_size": 65536 00:20:32.705 }, 00:20:32.705 { 00:20:32.705 "name": "BaseBdev4", 00:20:32.705 "uuid": "532856f7-082e-5720-bb6d-03040e02a732", 00:20:32.705 "is_configured": true, 00:20:32.705 "data_offset": 0, 00:20:32.705 "data_size": 65536 00:20:32.705 } 00:20:32.705 ] 00:20:32.705 }' 00:20:32.705 
13:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:32.705 13:15:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:33.271 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:33.271 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:33.271 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.271 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:33.271 [2024-12-06 13:15:20.034384] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:33.271 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.271 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:20:33.271 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:33.271 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.271 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.271 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:33.271 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.271 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:20:33.271 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:20:33.271 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:33.271 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:20:33.271 13:15:20 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.271 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:33.271 [2024-12-06 13:15:20.125901] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:33.271 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.271 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:33.271 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:33.271 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:33.271 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:33.271 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:33.271 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:33.271 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:33.271 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:33.271 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:33.271 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:33.271 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.271 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.271 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.271 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:33.271 13:15:20 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.271 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:33.271 "name": "raid_bdev1", 00:20:33.271 "uuid": "2ec6ae5c-2393-44a9-8142-3bc317e2286a", 00:20:33.271 "strip_size_kb": 0, 00:20:33.271 "state": "online", 00:20:33.271 "raid_level": "raid1", 00:20:33.271 "superblock": false, 00:20:33.271 "num_base_bdevs": 4, 00:20:33.271 "num_base_bdevs_discovered": 3, 00:20:33.271 "num_base_bdevs_operational": 3, 00:20:33.271 "base_bdevs_list": [ 00:20:33.271 { 00:20:33.271 "name": null, 00:20:33.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.271 "is_configured": false, 00:20:33.271 "data_offset": 0, 00:20:33.271 "data_size": 65536 00:20:33.271 }, 00:20:33.271 { 00:20:33.271 "name": "BaseBdev2", 00:20:33.271 "uuid": "97abe3d1-9f5a-5ea0-9b9c-01b07dfc1289", 00:20:33.271 "is_configured": true, 00:20:33.271 "data_offset": 0, 00:20:33.271 "data_size": 65536 00:20:33.271 }, 00:20:33.271 { 00:20:33.271 "name": "BaseBdev3", 00:20:33.271 "uuid": "b04ad229-412d-546d-aca6-67eff53fd8c9", 00:20:33.271 "is_configured": true, 00:20:33.271 "data_offset": 0, 00:20:33.271 "data_size": 65536 00:20:33.271 }, 00:20:33.271 { 00:20:33.271 "name": "BaseBdev4", 00:20:33.271 "uuid": "532856f7-082e-5720-bb6d-03040e02a732", 00:20:33.271 "is_configured": true, 00:20:33.271 "data_offset": 0, 00:20:33.271 "data_size": 65536 00:20:33.271 } 00:20:33.271 ] 00:20:33.271 }' 00:20:33.271 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:33.271 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:33.529 [2024-12-06 13:15:20.307397] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:33.529 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:33.529 Zero copy mechanism will not be used. 00:20:33.529 Running I/O for 60 seconds... 
00:20:33.786 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:33.786 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.786 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:33.786 [2024-12-06 13:15:20.706228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:33.786 13:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.786 13:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:33.786 [2024-12-06 13:15:20.778033] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:20:33.786 [2024-12-06 13:15:20.781165] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:34.044 [2024-12-06 13:15:20.936502] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:34.301 [2024-12-06 13:15:21.063634] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:34.301 [2024-12-06 13:15:21.064944] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:34.559 138.00 IOPS, 414.00 MiB/s [2024-12-06T13:15:21.575Z] [2024-12-06 13:15:21.435092] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:34.817 13:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:34.817 13:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:34.817 13:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:34.817 13:15:21 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:34.817 13:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:34.817 13:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.817 13:15:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.817 13:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.817 13:15:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:34.817 13:15:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.817 [2024-12-06 13:15:21.794608] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:34.817 [2024-12-06 13:15:21.796837] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:34.817 13:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:34.817 "name": "raid_bdev1", 00:20:34.817 "uuid": "2ec6ae5c-2393-44a9-8142-3bc317e2286a", 00:20:34.817 "strip_size_kb": 0, 00:20:34.817 "state": "online", 00:20:34.817 "raid_level": "raid1", 00:20:34.817 "superblock": false, 00:20:34.817 "num_base_bdevs": 4, 00:20:34.817 "num_base_bdevs_discovered": 4, 00:20:34.817 "num_base_bdevs_operational": 4, 00:20:34.817 "process": { 00:20:34.817 "type": "rebuild", 00:20:34.817 "target": "spare", 00:20:34.817 "progress": { 00:20:34.817 "blocks": 12288, 00:20:34.817 "percent": 18 00:20:34.817 } 00:20:34.817 }, 00:20:34.817 "base_bdevs_list": [ 00:20:34.817 { 00:20:34.817 "name": "spare", 00:20:34.817 "uuid": "5222e140-4201-52ad-900d-141ab6aa5b5b", 00:20:34.817 "is_configured": true, 00:20:34.817 "data_offset": 0, 00:20:34.817 "data_size": 65536 00:20:34.817 }, 00:20:34.817 { 
00:20:34.817 "name": "BaseBdev2", 00:20:34.817 "uuid": "97abe3d1-9f5a-5ea0-9b9c-01b07dfc1289", 00:20:34.817 "is_configured": true, 00:20:34.817 "data_offset": 0, 00:20:34.817 "data_size": 65536 00:20:34.817 }, 00:20:34.817 { 00:20:34.817 "name": "BaseBdev3", 00:20:34.817 "uuid": "b04ad229-412d-546d-aca6-67eff53fd8c9", 00:20:34.817 "is_configured": true, 00:20:34.817 "data_offset": 0, 00:20:34.817 "data_size": 65536 00:20:34.817 }, 00:20:34.817 { 00:20:34.817 "name": "BaseBdev4", 00:20:34.817 "uuid": "532856f7-082e-5720-bb6d-03040e02a732", 00:20:34.817 "is_configured": true, 00:20:34.817 "data_offset": 0, 00:20:34.817 "data_size": 65536 00:20:34.817 } 00:20:34.817 ] 00:20:34.817 }' 00:20:34.817 13:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:35.075 13:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:35.075 13:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:35.075 13:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:35.075 13:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:35.075 13:15:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.075 13:15:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:35.075 [2024-12-06 13:15:21.935177] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:35.075 [2024-12-06 13:15:22.047252] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:35.075 [2024-12-06 13:15:22.056939] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:35.075 [2024-12-06 13:15:22.062287] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:20:35.075 [2024-12-06 13:15:22.062339] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:35.075 [2024-12-06 13:15:22.062356] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:35.333 [2024-12-06 13:15:22.108134] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:20:35.333 13:15:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.333 13:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:35.333 13:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:35.333 13:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:35.333 13:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:35.333 13:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:35.333 13:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:35.333 13:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:35.333 13:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:35.333 13:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:35.333 13:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:35.333 13:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.333 13:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.333 13:15:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.333 13:15:22 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:35.333 13:15:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.333 13:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:35.333 "name": "raid_bdev1", 00:20:35.333 "uuid": "2ec6ae5c-2393-44a9-8142-3bc317e2286a", 00:20:35.333 "strip_size_kb": 0, 00:20:35.333 "state": "online", 00:20:35.333 "raid_level": "raid1", 00:20:35.333 "superblock": false, 00:20:35.333 "num_base_bdevs": 4, 00:20:35.333 "num_base_bdevs_discovered": 3, 00:20:35.333 "num_base_bdevs_operational": 3, 00:20:35.333 "base_bdevs_list": [ 00:20:35.333 { 00:20:35.333 "name": null, 00:20:35.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.333 "is_configured": false, 00:20:35.333 "data_offset": 0, 00:20:35.333 "data_size": 65536 00:20:35.333 }, 00:20:35.333 { 00:20:35.333 "name": "BaseBdev2", 00:20:35.333 "uuid": "97abe3d1-9f5a-5ea0-9b9c-01b07dfc1289", 00:20:35.333 "is_configured": true, 00:20:35.333 "data_offset": 0, 00:20:35.333 "data_size": 65536 00:20:35.333 }, 00:20:35.333 { 00:20:35.333 "name": "BaseBdev3", 00:20:35.333 "uuid": "b04ad229-412d-546d-aca6-67eff53fd8c9", 00:20:35.333 "is_configured": true, 00:20:35.333 "data_offset": 0, 00:20:35.333 "data_size": 65536 00:20:35.333 }, 00:20:35.333 { 00:20:35.333 "name": "BaseBdev4", 00:20:35.333 "uuid": "532856f7-082e-5720-bb6d-03040e02a732", 00:20:35.333 "is_configured": true, 00:20:35.333 "data_offset": 0, 00:20:35.333 "data_size": 65536 00:20:35.333 } 00:20:35.333 ] 00:20:35.333 }' 00:20:35.333 13:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:35.333 13:15:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:35.898 116.00 IOPS, 348.00 MiB/s [2024-12-06T13:15:22.914Z] 13:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:35.898 13:15:22 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:35.898 13:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:35.898 13:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:35.898 13:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:35.898 13:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.898 13:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.898 13:15:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.898 13:15:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:35.898 13:15:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.898 13:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:35.898 "name": "raid_bdev1", 00:20:35.898 "uuid": "2ec6ae5c-2393-44a9-8142-3bc317e2286a", 00:20:35.898 "strip_size_kb": 0, 00:20:35.898 "state": "online", 00:20:35.898 "raid_level": "raid1", 00:20:35.898 "superblock": false, 00:20:35.898 "num_base_bdevs": 4, 00:20:35.898 "num_base_bdevs_discovered": 3, 00:20:35.898 "num_base_bdevs_operational": 3, 00:20:35.898 "base_bdevs_list": [ 00:20:35.898 { 00:20:35.898 "name": null, 00:20:35.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.898 "is_configured": false, 00:20:35.898 "data_offset": 0, 00:20:35.898 "data_size": 65536 00:20:35.898 }, 00:20:35.898 { 00:20:35.898 "name": "BaseBdev2", 00:20:35.898 "uuid": "97abe3d1-9f5a-5ea0-9b9c-01b07dfc1289", 00:20:35.898 "is_configured": true, 00:20:35.898 "data_offset": 0, 00:20:35.898 "data_size": 65536 00:20:35.898 }, 00:20:35.898 { 00:20:35.898 "name": "BaseBdev3", 00:20:35.898 "uuid": "b04ad229-412d-546d-aca6-67eff53fd8c9", 
00:20:35.898 "is_configured": true, 00:20:35.898 "data_offset": 0, 00:20:35.898 "data_size": 65536 00:20:35.898 }, 00:20:35.898 { 00:20:35.898 "name": "BaseBdev4", 00:20:35.898 "uuid": "532856f7-082e-5720-bb6d-03040e02a732", 00:20:35.898 "is_configured": true, 00:20:35.898 "data_offset": 0, 00:20:35.898 "data_size": 65536 00:20:35.898 } 00:20:35.898 ] 00:20:35.898 }' 00:20:35.898 13:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:35.898 13:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:35.898 13:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:35.898 13:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:35.898 13:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:35.898 13:15:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.898 13:15:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:35.898 [2024-12-06 13:15:22.824733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:35.898 13:15:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.898 13:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:36.156 [2024-12-06 13:15:22.916589] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:20:36.156 [2024-12-06 13:15:22.919562] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:36.156 [2024-12-06 13:15:23.031817] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:36.156 [2024-12-06 13:15:23.033111] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:36.156 [2024-12-06 13:15:23.170292] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:36.414 [2024-12-06 13:15:23.171729] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:36.673 127.33 IOPS, 382.00 MiB/s [2024-12-06T13:15:23.689Z] [2024-12-06 13:15:23.527251] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:36.673 [2024-12-06 13:15:23.529554] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:36.955 [2024-12-06 13:15:23.795061] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:36.955 [2024-12-06 13:15:23.795673] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:36.955 13:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:36.955 13:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:36.955 13:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:36.955 13:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:36.955 13:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:36.955 13:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.955 13:15:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.955 13:15:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:36.955 13:15:23 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.955 13:15:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.955 13:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:36.955 "name": "raid_bdev1", 00:20:36.955 "uuid": "2ec6ae5c-2393-44a9-8142-3bc317e2286a", 00:20:36.955 "strip_size_kb": 0, 00:20:36.955 "state": "online", 00:20:36.955 "raid_level": "raid1", 00:20:36.955 "superblock": false, 00:20:36.955 "num_base_bdevs": 4, 00:20:36.955 "num_base_bdevs_discovered": 4, 00:20:36.955 "num_base_bdevs_operational": 4, 00:20:36.955 "process": { 00:20:36.955 "type": "rebuild", 00:20:36.955 "target": "spare", 00:20:36.955 "progress": { 00:20:36.955 "blocks": 10240, 00:20:36.955 "percent": 15 00:20:36.955 } 00:20:36.955 }, 00:20:36.955 "base_bdevs_list": [ 00:20:36.955 { 00:20:36.955 "name": "spare", 00:20:36.955 "uuid": "5222e140-4201-52ad-900d-141ab6aa5b5b", 00:20:36.955 "is_configured": true, 00:20:36.955 "data_offset": 0, 00:20:36.955 "data_size": 65536 00:20:36.955 }, 00:20:36.955 { 00:20:36.955 "name": "BaseBdev2", 00:20:36.955 "uuid": "97abe3d1-9f5a-5ea0-9b9c-01b07dfc1289", 00:20:36.955 "is_configured": true, 00:20:36.955 "data_offset": 0, 00:20:36.955 "data_size": 65536 00:20:36.955 }, 00:20:36.955 { 00:20:36.955 "name": "BaseBdev3", 00:20:36.955 "uuid": "b04ad229-412d-546d-aca6-67eff53fd8c9", 00:20:36.955 "is_configured": true, 00:20:36.955 "data_offset": 0, 00:20:36.955 "data_size": 65536 00:20:36.955 }, 00:20:36.955 { 00:20:36.955 "name": "BaseBdev4", 00:20:36.955 "uuid": "532856f7-082e-5720-bb6d-03040e02a732", 00:20:36.955 "is_configured": true, 00:20:36.955 "data_offset": 0, 00:20:36.955 "data_size": 65536 00:20:36.955 } 00:20:36.955 ] 00:20:36.955 }' 00:20:36.955 13:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:37.214 13:15:23 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:37.214 13:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:37.214 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:37.214 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:20:37.214 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:20:37.214 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:37.214 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:20:37.214 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:37.214 13:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.214 13:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:37.214 [2024-12-06 13:15:24.058888] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:37.214 [2024-12-06 13:15:24.069972] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:37.214 [2024-12-06 13:15:24.183632] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:20:37.214 [2024-12-06 13:15:24.183852] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:20:37.214 13:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.214 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:20:37.214 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:20:37.214 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:37.214 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:37.214 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:37.214 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:37.214 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:37.214 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.214 13:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.214 13:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:37.214 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.493 13:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.493 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:37.493 "name": "raid_bdev1", 00:20:37.493 "uuid": "2ec6ae5c-2393-44a9-8142-3bc317e2286a", 00:20:37.493 "strip_size_kb": 0, 00:20:37.493 "state": "online", 00:20:37.493 "raid_level": "raid1", 00:20:37.493 "superblock": false, 00:20:37.493 "num_base_bdevs": 4, 00:20:37.493 "num_base_bdevs_discovered": 3, 00:20:37.493 "num_base_bdevs_operational": 3, 00:20:37.493 "process": { 00:20:37.493 "type": "rebuild", 00:20:37.493 "target": "spare", 00:20:37.493 "progress": { 00:20:37.493 "blocks": 14336, 00:20:37.493 "percent": 21 00:20:37.493 } 00:20:37.493 }, 00:20:37.493 "base_bdevs_list": [ 00:20:37.493 { 00:20:37.493 "name": "spare", 00:20:37.493 "uuid": "5222e140-4201-52ad-900d-141ab6aa5b5b", 00:20:37.493 "is_configured": true, 00:20:37.493 "data_offset": 0, 00:20:37.494 "data_size": 65536 00:20:37.494 }, 00:20:37.494 { 00:20:37.494 "name": null, 
00:20:37.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.494 "is_configured": false, 00:20:37.494 "data_offset": 0, 00:20:37.494 "data_size": 65536 00:20:37.494 }, 00:20:37.494 { 00:20:37.494 "name": "BaseBdev3", 00:20:37.494 "uuid": "b04ad229-412d-546d-aca6-67eff53fd8c9", 00:20:37.494 "is_configured": true, 00:20:37.494 "data_offset": 0, 00:20:37.494 "data_size": 65536 00:20:37.494 }, 00:20:37.494 { 00:20:37.494 "name": "BaseBdev4", 00:20:37.494 "uuid": "532856f7-082e-5720-bb6d-03040e02a732", 00:20:37.494 "is_configured": true, 00:20:37.494 "data_offset": 0, 00:20:37.494 "data_size": 65536 00:20:37.494 } 00:20:37.494 ] 00:20:37.494 }' 00:20:37.494 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:37.494 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:37.494 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:37.494 119.25 IOPS, 357.75 MiB/s [2024-12-06T13:15:24.510Z] 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:37.494 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=535 00:20:37.494 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:37.494 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:37.494 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:37.494 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:37.494 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:37.494 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:37.494 13:15:24 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.494 13:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.494 13:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:37.494 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.494 13:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.494 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:37.494 "name": "raid_bdev1", 00:20:37.494 "uuid": "2ec6ae5c-2393-44a9-8142-3bc317e2286a", 00:20:37.494 "strip_size_kb": 0, 00:20:37.494 "state": "online", 00:20:37.494 "raid_level": "raid1", 00:20:37.494 "superblock": false, 00:20:37.494 "num_base_bdevs": 4, 00:20:37.494 "num_base_bdevs_discovered": 3, 00:20:37.494 "num_base_bdevs_operational": 3, 00:20:37.494 "process": { 00:20:37.494 "type": "rebuild", 00:20:37.494 "target": "spare", 00:20:37.494 "progress": { 00:20:37.494 "blocks": 16384, 00:20:37.494 "percent": 25 00:20:37.494 } 00:20:37.494 }, 00:20:37.494 "base_bdevs_list": [ 00:20:37.494 { 00:20:37.494 "name": "spare", 00:20:37.494 "uuid": "5222e140-4201-52ad-900d-141ab6aa5b5b", 00:20:37.494 "is_configured": true, 00:20:37.494 "data_offset": 0, 00:20:37.494 "data_size": 65536 00:20:37.494 }, 00:20:37.494 { 00:20:37.494 "name": null, 00:20:37.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.494 "is_configured": false, 00:20:37.494 "data_offset": 0, 00:20:37.494 "data_size": 65536 00:20:37.494 }, 00:20:37.494 { 00:20:37.494 "name": "BaseBdev3", 00:20:37.494 "uuid": "b04ad229-412d-546d-aca6-67eff53fd8c9", 00:20:37.494 "is_configured": true, 00:20:37.494 "data_offset": 0, 00:20:37.494 "data_size": 65536 00:20:37.494 }, 00:20:37.494 { 00:20:37.494 "name": "BaseBdev4", 00:20:37.494 "uuid": "532856f7-082e-5720-bb6d-03040e02a732", 00:20:37.494 "is_configured": true, 
00:20:37.494 "data_offset": 0, 00:20:37.494 "data_size": 65536 00:20:37.494 } 00:20:37.494 ] 00:20:37.494 }' 00:20:37.494 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:37.494 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:37.494 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:37.762 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:37.762 13:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:37.762 [2024-12-06 13:15:24.553141] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:38.021 [2024-12-06 13:15:24.810298] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:38.287 [2024-12-06 13:15:25.201548] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:20:38.544 [2024-12-06 13:15:25.320707] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:20:38.544 [2024-12-06 13:15:25.321305] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:20:38.544 106.00 IOPS, 318.00 MiB/s [2024-12-06T13:15:25.560Z] 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:38.544 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:38.544 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:38.544 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:20:38.544 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:38.544 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:38.544 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.544 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.544 13:15:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.544 13:15:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:38.544 13:15:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.803 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:38.803 "name": "raid_bdev1", 00:20:38.803 "uuid": "2ec6ae5c-2393-44a9-8142-3bc317e2286a", 00:20:38.803 "strip_size_kb": 0, 00:20:38.803 "state": "online", 00:20:38.803 "raid_level": "raid1", 00:20:38.803 "superblock": false, 00:20:38.803 "num_base_bdevs": 4, 00:20:38.803 "num_base_bdevs_discovered": 3, 00:20:38.803 "num_base_bdevs_operational": 3, 00:20:38.803 "process": { 00:20:38.803 "type": "rebuild", 00:20:38.803 "target": "spare", 00:20:38.803 "progress": { 00:20:38.803 "blocks": 30720, 00:20:38.803 "percent": 46 00:20:38.803 } 00:20:38.803 }, 00:20:38.803 "base_bdevs_list": [ 00:20:38.803 { 00:20:38.803 "name": "spare", 00:20:38.803 "uuid": "5222e140-4201-52ad-900d-141ab6aa5b5b", 00:20:38.803 "is_configured": true, 00:20:38.803 "data_offset": 0, 00:20:38.803 "data_size": 65536 00:20:38.803 }, 00:20:38.803 { 00:20:38.803 "name": null, 00:20:38.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.803 "is_configured": false, 00:20:38.803 "data_offset": 0, 00:20:38.803 "data_size": 65536 00:20:38.803 }, 00:20:38.803 { 00:20:38.803 "name": "BaseBdev3", 00:20:38.803 "uuid": 
"b04ad229-412d-546d-aca6-67eff53fd8c9", 00:20:38.803 "is_configured": true, 00:20:38.803 "data_offset": 0, 00:20:38.803 "data_size": 65536 00:20:38.803 }, 00:20:38.803 { 00:20:38.803 "name": "BaseBdev4", 00:20:38.803 "uuid": "532856f7-082e-5720-bb6d-03040e02a732", 00:20:38.803 "is_configured": true, 00:20:38.803 "data_offset": 0, 00:20:38.803 "data_size": 65536 00:20:38.803 } 00:20:38.803 ] 00:20:38.803 }' 00:20:38.803 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:38.803 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:38.803 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:38.803 [2024-12-06 13:15:25.661559] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:20:38.803 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:38.803 13:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:39.061 [2024-12-06 13:15:25.908831] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:20:39.320 [2024-12-06 13:15:26.262613] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:20:39.838 95.50 IOPS, 286.50 MiB/s [2024-12-06T13:15:26.854Z] 13:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:39.838 13:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:39.838 13:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:39.838 13:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:39.838 13:15:26 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:39.838 13:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:39.838 13:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.838 13:15:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.838 13:15:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:39.838 13:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.838 13:15:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.838 13:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:39.838 "name": "raid_bdev1", 00:20:39.838 "uuid": "2ec6ae5c-2393-44a9-8142-3bc317e2286a", 00:20:39.838 "strip_size_kb": 0, 00:20:39.838 "state": "online", 00:20:39.838 "raid_level": "raid1", 00:20:39.838 "superblock": false, 00:20:39.838 "num_base_bdevs": 4, 00:20:39.838 "num_base_bdevs_discovered": 3, 00:20:39.838 "num_base_bdevs_operational": 3, 00:20:39.838 "process": { 00:20:39.838 "type": "rebuild", 00:20:39.838 "target": "spare", 00:20:39.838 "progress": { 00:20:39.838 "blocks": 43008, 00:20:39.838 "percent": 65 00:20:39.838 } 00:20:39.838 }, 00:20:39.838 "base_bdevs_list": [ 00:20:39.838 { 00:20:39.838 "name": "spare", 00:20:39.838 "uuid": "5222e140-4201-52ad-900d-141ab6aa5b5b", 00:20:39.838 "is_configured": true, 00:20:39.838 "data_offset": 0, 00:20:39.838 "data_size": 65536 00:20:39.838 }, 00:20:39.838 { 00:20:39.838 "name": null, 00:20:39.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.838 "is_configured": false, 00:20:39.838 "data_offset": 0, 00:20:39.838 "data_size": 65536 00:20:39.838 }, 00:20:39.838 { 00:20:39.838 "name": "BaseBdev3", 00:20:39.838 "uuid": "b04ad229-412d-546d-aca6-67eff53fd8c9", 00:20:39.838 
"is_configured": true, 00:20:39.838 "data_offset": 0, 00:20:39.838 "data_size": 65536 00:20:39.838 }, 00:20:39.838 { 00:20:39.838 "name": "BaseBdev4", 00:20:39.838 "uuid": "532856f7-082e-5720-bb6d-03040e02a732", 00:20:39.838 "is_configured": true, 00:20:39.838 "data_offset": 0, 00:20:39.838 "data_size": 65536 00:20:39.838 } 00:20:39.838 ] 00:20:39.838 }' 00:20:39.838 13:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:39.838 [2024-12-06 13:15:26.765742] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:20:39.838 13:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:39.838 13:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:40.097 13:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:40.097 13:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:40.919 87.71 IOPS, 263.14 MiB/s [2024-12-06T13:15:27.935Z] 13:15:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:40.919 13:15:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:40.919 13:15:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:40.919 13:15:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:40.919 13:15:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:40.919 13:15:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:40.919 13:15:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.919 13:15:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:20:40.919 13:15:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.919 13:15:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:40.919 13:15:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.919 13:15:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:40.919 "name": "raid_bdev1", 00:20:40.919 "uuid": "2ec6ae5c-2393-44a9-8142-3bc317e2286a", 00:20:40.919 "strip_size_kb": 0, 00:20:40.919 "state": "online", 00:20:40.919 "raid_level": "raid1", 00:20:40.919 "superblock": false, 00:20:40.919 "num_base_bdevs": 4, 00:20:40.919 "num_base_bdevs_discovered": 3, 00:20:40.919 "num_base_bdevs_operational": 3, 00:20:40.919 "process": { 00:20:40.919 "type": "rebuild", 00:20:40.919 "target": "spare", 00:20:40.919 "progress": { 00:20:40.919 "blocks": 63488, 00:20:40.919 "percent": 96 00:20:40.919 } 00:20:40.919 }, 00:20:40.919 "base_bdevs_list": [ 00:20:40.919 { 00:20:40.919 "name": "spare", 00:20:40.919 "uuid": "5222e140-4201-52ad-900d-141ab6aa5b5b", 00:20:40.919 "is_configured": true, 00:20:40.919 "data_offset": 0, 00:20:40.919 "data_size": 65536 00:20:40.919 }, 00:20:40.919 { 00:20:40.919 "name": null, 00:20:40.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.919 "is_configured": false, 00:20:40.919 "data_offset": 0, 00:20:40.919 "data_size": 65536 00:20:40.919 }, 00:20:40.919 { 00:20:40.919 "name": "BaseBdev3", 00:20:40.919 "uuid": "b04ad229-412d-546d-aca6-67eff53fd8c9", 00:20:40.919 "is_configured": true, 00:20:40.919 "data_offset": 0, 00:20:40.919 "data_size": 65536 00:20:40.919 }, 00:20:40.919 { 00:20:40.919 "name": "BaseBdev4", 00:20:40.919 "uuid": "532856f7-082e-5720-bb6d-03040e02a732", 00:20:40.919 "is_configured": true, 00:20:40.919 "data_offset": 0, 00:20:40.919 "data_size": 65536 00:20:40.919 } 00:20:40.919 ] 00:20:40.919 }' 00:20:40.919 13:15:27 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:40.919 [2024-12-06 13:15:27.927679] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:41.176 13:15:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:41.176 13:15:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:41.176 13:15:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:41.176 13:15:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:41.176 [2024-12-06 13:15:28.027648] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:41.176 [2024-12-06 13:15:28.031977] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:42.036 80.75 IOPS, 242.25 MiB/s [2024-12-06T13:15:29.052Z] 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:42.036 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:42.036 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:42.036 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:42.036 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:42.036 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:42.036 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.036 13:15:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.036 13:15:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:42.036 13:15:29 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.309 13:15:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.309 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:42.309 "name": "raid_bdev1", 00:20:42.309 "uuid": "2ec6ae5c-2393-44a9-8142-3bc317e2286a", 00:20:42.309 "strip_size_kb": 0, 00:20:42.309 "state": "online", 00:20:42.309 "raid_level": "raid1", 00:20:42.309 "superblock": false, 00:20:42.309 "num_base_bdevs": 4, 00:20:42.309 "num_base_bdevs_discovered": 3, 00:20:42.309 "num_base_bdevs_operational": 3, 00:20:42.309 "base_bdevs_list": [ 00:20:42.309 { 00:20:42.309 "name": "spare", 00:20:42.309 "uuid": "5222e140-4201-52ad-900d-141ab6aa5b5b", 00:20:42.309 "is_configured": true, 00:20:42.309 "data_offset": 0, 00:20:42.309 "data_size": 65536 00:20:42.309 }, 00:20:42.309 { 00:20:42.309 "name": null, 00:20:42.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.309 "is_configured": false, 00:20:42.309 "data_offset": 0, 00:20:42.309 "data_size": 65536 00:20:42.309 }, 00:20:42.309 { 00:20:42.309 "name": "BaseBdev3", 00:20:42.309 "uuid": "b04ad229-412d-546d-aca6-67eff53fd8c9", 00:20:42.309 "is_configured": true, 00:20:42.309 "data_offset": 0, 00:20:42.309 "data_size": 65536 00:20:42.309 }, 00:20:42.309 { 00:20:42.309 "name": "BaseBdev4", 00:20:42.309 "uuid": "532856f7-082e-5720-bb6d-03040e02a732", 00:20:42.309 "is_configured": true, 00:20:42.309 "data_offset": 0, 00:20:42.309 "data_size": 65536 00:20:42.309 } 00:20:42.309 ] 00:20:42.309 }' 00:20:42.309 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:42.309 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:42.309 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:42.309 13:15:29 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:42.309 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:20:42.309 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:42.309 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:42.309 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:42.309 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:42.309 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:42.309 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.309 13:15:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.309 13:15:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:42.309 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.309 13:15:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.309 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:42.309 "name": "raid_bdev1", 00:20:42.309 "uuid": "2ec6ae5c-2393-44a9-8142-3bc317e2286a", 00:20:42.309 "strip_size_kb": 0, 00:20:42.309 "state": "online", 00:20:42.309 "raid_level": "raid1", 00:20:42.309 "superblock": false, 00:20:42.309 "num_base_bdevs": 4, 00:20:42.309 "num_base_bdevs_discovered": 3, 00:20:42.309 "num_base_bdevs_operational": 3, 00:20:42.309 "base_bdevs_list": [ 00:20:42.309 { 00:20:42.309 "name": "spare", 00:20:42.310 "uuid": "5222e140-4201-52ad-900d-141ab6aa5b5b", 00:20:42.310 "is_configured": true, 00:20:42.310 "data_offset": 0, 00:20:42.310 "data_size": 65536 00:20:42.310 }, 
00:20:42.310 { 00:20:42.310 "name": null, 00:20:42.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.310 "is_configured": false, 00:20:42.310 "data_offset": 0, 00:20:42.310 "data_size": 65536 00:20:42.310 }, 00:20:42.310 { 00:20:42.310 "name": "BaseBdev3", 00:20:42.310 "uuid": "b04ad229-412d-546d-aca6-67eff53fd8c9", 00:20:42.310 "is_configured": true, 00:20:42.310 "data_offset": 0, 00:20:42.310 "data_size": 65536 00:20:42.310 }, 00:20:42.310 { 00:20:42.310 "name": "BaseBdev4", 00:20:42.310 "uuid": "532856f7-082e-5720-bb6d-03040e02a732", 00:20:42.310 "is_configured": true, 00:20:42.310 "data_offset": 0, 00:20:42.310 "data_size": 65536 00:20:42.310 } 00:20:42.310 ] 00:20:42.310 }' 00:20:42.310 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:42.310 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:42.310 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:42.571 76.89 IOPS, 230.67 MiB/s [2024-12-06T13:15:29.587Z] 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:42.571 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:42.571 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:42.571 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:42.571 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:42.571 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:42.571 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:42.571 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:42.571 
13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:42.571 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:42.571 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:42.571 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.571 13:15:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.571 13:15:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:42.571 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.571 13:15:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.571 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:42.571 "name": "raid_bdev1", 00:20:42.571 "uuid": "2ec6ae5c-2393-44a9-8142-3bc317e2286a", 00:20:42.571 "strip_size_kb": 0, 00:20:42.571 "state": "online", 00:20:42.571 "raid_level": "raid1", 00:20:42.571 "superblock": false, 00:20:42.571 "num_base_bdevs": 4, 00:20:42.571 "num_base_bdevs_discovered": 3, 00:20:42.571 "num_base_bdevs_operational": 3, 00:20:42.571 "base_bdevs_list": [ 00:20:42.571 { 00:20:42.571 "name": "spare", 00:20:42.571 "uuid": "5222e140-4201-52ad-900d-141ab6aa5b5b", 00:20:42.571 "is_configured": true, 00:20:42.571 "data_offset": 0, 00:20:42.571 "data_size": 65536 00:20:42.571 }, 00:20:42.571 { 00:20:42.571 "name": null, 00:20:42.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.571 "is_configured": false, 00:20:42.571 "data_offset": 0, 00:20:42.571 "data_size": 65536 00:20:42.571 }, 00:20:42.571 { 00:20:42.571 "name": "BaseBdev3", 00:20:42.571 "uuid": "b04ad229-412d-546d-aca6-67eff53fd8c9", 00:20:42.571 "is_configured": true, 00:20:42.571 "data_offset": 0, 00:20:42.571 "data_size": 65536 
00:20:42.571 }, 00:20:42.571 { 00:20:42.571 "name": "BaseBdev4", 00:20:42.571 "uuid": "532856f7-082e-5720-bb6d-03040e02a732", 00:20:42.571 "is_configured": true, 00:20:42.571 "data_offset": 0, 00:20:42.571 "data_size": 65536 00:20:42.571 } 00:20:42.571 ] 00:20:42.571 }' 00:20:42.572 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:42.572 13:15:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:43.138 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:43.138 13:15:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.138 13:15:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:43.138 [2024-12-06 13:15:29.906818] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:43.138 [2024-12-06 13:15:29.906995] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:43.138 00:20:43.138 Latency(us) 00:20:43.138 [2024-12-06T13:15:30.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.138 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:20:43.138 raid_bdev1 : 9.63 75.30 225.90 0.00 0.00 19281.38 283.00 119156.36 00:20:43.138 [2024-12-06T13:15:30.154Z] =================================================================================================================== 00:20:43.138 [2024-12-06T13:15:30.154Z] Total : 75.30 225.90 0.00 0.00 19281.38 283.00 119156.36 00:20:43.138 [2024-12-06 13:15:29.957222] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:43.138 [2024-12-06 13:15:29.957300] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:43.138 [2024-12-06 13:15:29.957427] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:20:43.138 [2024-12-06 13:15:29.957444] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:43.138 { 00:20:43.138 "results": [ 00:20:43.138 { 00:20:43.138 "job": "raid_bdev1", 00:20:43.138 "core_mask": "0x1", 00:20:43.138 "workload": "randrw", 00:20:43.138 "percentage": 50, 00:20:43.138 "status": "finished", 00:20:43.138 "queue_depth": 2, 00:20:43.138 "io_size": 3145728, 00:20:43.138 "runtime": 9.628155, 00:20:43.138 "iops": 75.2999925738628, 00:20:43.138 "mibps": 225.89997772158839, 00:20:43.138 "io_failed": 0, 00:20:43.138 "io_timeout": 0, 00:20:43.138 "avg_latency_us": 19281.375578683383, 00:20:43.138 "min_latency_us": 282.99636363636364, 00:20:43.138 "max_latency_us": 119156.36363636363 00:20:43.138 } 00:20:43.138 ], 00:20:43.138 "core_count": 1 00:20:43.138 } 00:20:43.138 13:15:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.138 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.138 13:15:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.138 13:15:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:20:43.138 13:15:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:43.138 13:15:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.138 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:43.138 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:43.138 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:20:43.138 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:20:43.138 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:20:43.138 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:20:43.138 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:43.138 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:43.138 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:43.138 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:20:43.138 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:43.138 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:43.138 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:20:43.396 /dev/nbd0 00:20:43.396 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:43.396 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:43.396 13:15:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:43.396 13:15:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:20:43.396 13:15:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:43.396 13:15:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:43.396 13:15:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:43.396 13:15:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:20:43.396 13:15:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:43.396 13:15:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:43.396 13:15:30 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:43.396 1+0 records in 00:20:43.396 1+0 records out 00:20:43.396 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289724 s, 14.1 MB/s 00:20:43.396 13:15:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:43.396 13:15:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:20:43.396 13:15:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:43.396 13:15:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:43.396 13:15:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:20:43.396 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:43.396 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:43.396 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:20:43.396 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:20:43.396 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:20:43.396 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:20:43.396 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:20:43.396 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:20:43.396 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:43.396 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:20:43.396 13:15:30 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:43.396 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:43.396 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:43.396 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:20:43.396 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:43.396 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:43.396 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:20:43.961 /dev/nbd1 00:20:43.961 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:43.961 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:43.961 13:15:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:43.961 13:15:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:20:43.961 13:15:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:43.961 13:15:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:43.961 13:15:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:43.961 13:15:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:20:43.961 13:15:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:43.961 13:15:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:43.961 13:15:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:20:43.961 1+0 records in 00:20:43.961 1+0 records out 00:20:43.961 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398747 s, 10.3 MB/s 00:20:43.961 13:15:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:43.961 13:15:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:20:43.961 13:15:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:43.961 13:15:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:43.961 13:15:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:20:43.961 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:43.961 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:43.961 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:43.961 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:20:43.961 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:43.961 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:43.961 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:43.961 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:20:43.961 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:43.961 13:15:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:44.526 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:44.526 13:15:31 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:44.526 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:44.526 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:44.526 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:44.526 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:44.526 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:20:44.526 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:20:44.526 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:20:44.526 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:20:44.526 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:20:44.526 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:44.526 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:20:44.526 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:44.526 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:44.526 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:44.526 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:20:44.526 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:44.526 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:44.526 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk BaseBdev4 /dev/nbd1 00:20:44.526 /dev/nbd1 00:20:44.784 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:44.784 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:44.784 13:15:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:44.784 13:15:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:20:44.784 13:15:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:44.784 13:15:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:44.784 13:15:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:44.784 13:15:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:20:44.784 13:15:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:44.784 13:15:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:44.784 13:15:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:44.784 1+0 records in 00:20:44.784 1+0 records out 00:20:44.784 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351807 s, 11.6 MB/s 00:20:44.784 13:15:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:44.784 13:15:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:20:44.784 13:15:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:44.784 13:15:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:44.784 13:15:31 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@893 -- # return 0 00:20:44.784 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:44.784 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:44.784 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:44.784 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:20:44.784 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:44.784 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:44.784 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:44.784 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:20:44.784 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:44.784 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:45.042 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:45.042 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:45.042 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:45.042 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:45.042 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:45.042 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:45.042 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:20:45.042 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:20:45.042 13:15:31 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:45.042 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:45.042 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:45.042 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:45.042 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:20:45.042 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:45.042 13:15:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:45.300 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:45.300 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:45.300 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:45.300 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:45.300 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:45.300 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:45.559 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:20:45.559 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:20:45.559 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:20:45.559 13:15:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79391 00:20:45.559 13:15:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 79391 ']' 00:20:45.559 13:15:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 
-- # kill -0 79391 00:20:45.559 13:15:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:20:45.559 13:15:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:45.559 13:15:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79391 00:20:45.559 killing process with pid 79391 00:20:45.559 Received shutdown signal, test time was about 12.039569 seconds 00:20:45.559 00:20:45.559 Latency(us) 00:20:45.559 [2024-12-06T13:15:32.575Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.559 [2024-12-06T13:15:32.575Z] =================================================================================================================== 00:20:45.559 [2024-12-06T13:15:32.575Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:45.559 13:15:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:45.559 13:15:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:45.559 13:15:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79391' 00:20:45.559 13:15:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 79391 00:20:45.559 [2024-12-06 13:15:32.350026] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:45.559 13:15:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 79391 00:20:45.817 [2024-12-06 13:15:32.753748] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:47.275 13:15:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:20:47.275 00:20:47.275 real 0m15.904s 00:20:47.275 user 0m20.713s 00:20:47.275 sys 0m1.975s 00:20:47.275 13:15:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:47.275 ************************************ 00:20:47.275 END TEST 
raid_rebuild_test_io 00:20:47.275 ************************************ 00:20:47.275 13:15:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:47.275 13:15:33 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:20:47.275 13:15:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:47.275 13:15:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:47.275 13:15:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:47.275 ************************************ 00:20:47.275 START TEST raid_rebuild_test_sb_io 00:20:47.275 ************************************ 00:20:47.275 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:20:47.275 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:47.275 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:20:47.275 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:47.275 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:20:47.275 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:47.275 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:47.275 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:47.275 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:47.275 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:47.275 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:47.275 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 
00:20:47.275 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:47.275 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:47.275 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:47.275 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:47.275 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:47.275 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:20:47.275 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:47.275 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:47.275 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:47.275 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:47.275 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:47.275 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:47.275 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:47.275 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:47.275 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:47.275 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:47.276 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:47.276 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:47.276 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:47.276 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79834 00:20:47.276 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79834 00:20:47.276 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79834 ']' 00:20:47.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.276 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:47.276 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.276 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:47.276 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.276 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:47.276 13:15:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:47.276 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:47.276 Zero copy mechanism will not be used. 00:20:47.276 [2024-12-06 13:15:34.145930] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:20:47.276 [2024-12-06 13:15:34.146145] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79834 ] 00:20:47.535 [2024-12-06 13:15:34.331048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.535 [2024-12-06 13:15:34.480805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.793 [2024-12-06 13:15:34.709959] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:47.793 [2024-12-06 13:15:34.710330] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:48.360 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:48.360 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:20:48.360 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:48.360 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:48.360 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.360 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:48.360 BaseBdev1_malloc 00:20:48.360 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.360 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:48.360 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.360 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:48.360 [2024-12-06 13:15:35.166519] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:48.360 [2024-12-06 13:15:35.166604] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:48.360 [2024-12-06 13:15:35.166642] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:48.360 [2024-12-06 13:15:35.166662] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:48.360 [2024-12-06 13:15:35.169738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:48.360 [2024-12-06 13:15:35.169788] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:48.360 BaseBdev1 00:20:48.360 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.360 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:48.360 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:48.360 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.360 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:48.360 BaseBdev2_malloc 00:20:48.360 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.360 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:48.360 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.360 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:48.360 [2024-12-06 13:15:35.223982] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:48.360 [2024-12-06 13:15:35.224087] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:20:48.360 [2024-12-06 13:15:35.224120] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:48.360 [2024-12-06 13:15:35.224139] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:48.360 [2024-12-06 13:15:35.227278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:48.360 [2024-12-06 13:15:35.227330] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:48.360 BaseBdev2 00:20:48.360 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.360 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:48.360 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:48.360 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.360 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:48.360 BaseBdev3_malloc 00:20:48.360 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.360 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:48.360 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.360 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:48.361 [2024-12-06 13:15:35.290507] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:48.361 [2024-12-06 13:15:35.290615] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:48.361 [2024-12-06 13:15:35.290650] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:48.361 
[2024-12-06 13:15:35.290670] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:48.361 [2024-12-06 13:15:35.293712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:48.361 [2024-12-06 13:15:35.293777] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:48.361 BaseBdev3 00:20:48.361 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.361 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:48.361 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:48.361 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.361 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:48.361 BaseBdev4_malloc 00:20:48.361 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.361 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:48.361 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.361 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:48.361 [2024-12-06 13:15:35.343000] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:48.361 [2024-12-06 13:15:35.343084] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:48.361 [2024-12-06 13:15:35.343117] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:48.361 [2024-12-06 13:15:35.343137] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:48.361 [2024-12-06 13:15:35.346090] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:48.361 [2024-12-06 13:15:35.346158] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:48.361 BaseBdev4 00:20:48.361 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.361 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:48.361 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.361 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:48.619 spare_malloc 00:20:48.619 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.620 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:48.620 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.620 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:48.620 spare_delay 00:20:48.620 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.620 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:48.620 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.620 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:48.620 [2024-12-06 13:15:35.403925] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:48.620 [2024-12-06 13:15:35.404168] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:48.620 [2024-12-06 13:15:35.404207] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:20:48.620 [2024-12-06 13:15:35.404226] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:48.620 [2024-12-06 13:15:35.407270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:48.620 [2024-12-06 13:15:35.407455] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:48.620 spare 00:20:48.620 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.620 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:20:48.620 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.620 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:48.620 [2024-12-06 13:15:35.412114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:48.620 [2024-12-06 13:15:35.414833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:48.620 [2024-12-06 13:15:35.414924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:48.620 [2024-12-06 13:15:35.415009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:48.620 [2024-12-06 13:15:35.415305] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:48.620 [2024-12-06 13:15:35.415328] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:48.620 [2024-12-06 13:15:35.415703] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:48.620 [2024-12-06 13:15:35.415950] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:48.620 [2024-12-06 13:15:35.415974] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:48.620 [2024-12-06 13:15:35.416209] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:48.620 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.620 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:48.620 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:48.620 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:48.620 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:48.620 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:48.620 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:48.620 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:48.620 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:48.620 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:48.620 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:48.620 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.620 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.620 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.620 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:48.620 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.620 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:48.620 "name": "raid_bdev1", 00:20:48.620 "uuid": "dba32c6c-79ab-4de6-8704-f17b46305f26", 00:20:48.620 "strip_size_kb": 0, 00:20:48.620 "state": "online", 00:20:48.620 "raid_level": "raid1", 00:20:48.620 "superblock": true, 00:20:48.620 "num_base_bdevs": 4, 00:20:48.620 "num_base_bdevs_discovered": 4, 00:20:48.620 "num_base_bdevs_operational": 4, 00:20:48.620 "base_bdevs_list": [ 00:20:48.620 { 00:20:48.620 "name": "BaseBdev1", 00:20:48.620 "uuid": "2e42f975-79bc-50c3-8de8-02427e122d3b", 00:20:48.620 "is_configured": true, 00:20:48.620 "data_offset": 2048, 00:20:48.620 "data_size": 63488 00:20:48.620 }, 00:20:48.620 { 00:20:48.620 "name": "BaseBdev2", 00:20:48.620 "uuid": "addcfc1a-070d-5bed-9706-8c4b82c05b90", 00:20:48.620 "is_configured": true, 00:20:48.620 "data_offset": 2048, 00:20:48.620 "data_size": 63488 00:20:48.620 }, 00:20:48.620 { 00:20:48.620 "name": "BaseBdev3", 00:20:48.620 "uuid": "d1bb9c65-6f8d-5475-b11c-40483c2a20a4", 00:20:48.620 "is_configured": true, 00:20:48.620 "data_offset": 2048, 00:20:48.620 "data_size": 63488 00:20:48.620 }, 00:20:48.620 { 00:20:48.620 "name": "BaseBdev4", 00:20:48.620 "uuid": "b07f58aa-112c-5abc-81d4-13b96d687bb7", 00:20:48.620 "is_configured": true, 00:20:48.620 "data_offset": 2048, 00:20:48.620 "data_size": 63488 00:20:48.620 } 00:20:48.620 ] 00:20:48.620 }' 00:20:48.620 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:48.620 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:49.186 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:49.186 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:49.186 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.186 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:49.186 [2024-12-06 13:15:35.928837] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:49.186 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.186 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:20:49.186 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.186 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.186 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:49.186 13:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:49.186 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.186 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:20:49.186 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:20:49.186 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:49.186 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.186 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:49.186 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:20:49.186 [2024-12-06 13:15:36.048370] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:49.186 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.186 13:15:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:49.186 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:49.186 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:49.186 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:49.186 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:49.186 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:49.186 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:49.186 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:49.186 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:49.186 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:49.186 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.186 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.186 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.186 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:49.186 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.186 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:49.186 "name": "raid_bdev1", 00:20:49.186 "uuid": "dba32c6c-79ab-4de6-8704-f17b46305f26", 00:20:49.186 "strip_size_kb": 0, 00:20:49.186 "state": "online", 00:20:49.186 "raid_level": "raid1", 00:20:49.186 
"superblock": true, 00:20:49.186 "num_base_bdevs": 4, 00:20:49.186 "num_base_bdevs_discovered": 3, 00:20:49.186 "num_base_bdevs_operational": 3, 00:20:49.186 "base_bdevs_list": [ 00:20:49.186 { 00:20:49.186 "name": null, 00:20:49.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.186 "is_configured": false, 00:20:49.186 "data_offset": 0, 00:20:49.186 "data_size": 63488 00:20:49.186 }, 00:20:49.186 { 00:20:49.186 "name": "BaseBdev2", 00:20:49.186 "uuid": "addcfc1a-070d-5bed-9706-8c4b82c05b90", 00:20:49.186 "is_configured": true, 00:20:49.186 "data_offset": 2048, 00:20:49.186 "data_size": 63488 00:20:49.186 }, 00:20:49.186 { 00:20:49.186 "name": "BaseBdev3", 00:20:49.186 "uuid": "d1bb9c65-6f8d-5475-b11c-40483c2a20a4", 00:20:49.187 "is_configured": true, 00:20:49.187 "data_offset": 2048, 00:20:49.187 "data_size": 63488 00:20:49.187 }, 00:20:49.187 { 00:20:49.187 "name": "BaseBdev4", 00:20:49.187 "uuid": "b07f58aa-112c-5abc-81d4-13b96d687bb7", 00:20:49.187 "is_configured": true, 00:20:49.187 "data_offset": 2048, 00:20:49.187 "data_size": 63488 00:20:49.187 } 00:20:49.187 ] 00:20:49.187 }' 00:20:49.187 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:49.187 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:49.187 [2024-12-06 13:15:36.185426] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:49.187 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:49.187 Zero copy mechanism will not be used. 00:20:49.187 Running I/O for 60 seconds... 
00:20:49.753 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:49.753 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.753 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:49.753 [2024-12-06 13:15:36.586480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:49.753 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.753 13:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:49.753 [2024-12-06 13:15:36.703971] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:20:49.753 [2024-12-06 13:15:36.706926] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:50.011 [2024-12-06 13:15:36.820354] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:50.011 [2024-12-06 13:15:36.822506] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:50.269 [2024-12-06 13:15:37.034809] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:50.269 [2024-12-06 13:15:37.035312] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:50.659 110.00 IOPS, 330.00 MiB/s [2024-12-06T13:15:37.675Z] [2024-12-06 13:15:37.366898] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:50.659 [2024-12-06 13:15:37.369103] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:50.659 [2024-12-06 13:15:37.595282] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:50.659 [2024-12-06 13:15:37.596496] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:50.659 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:50.659 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:50.659 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:50.659 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:50.659 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:50.659 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.659 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.659 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.659 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:50.917 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.917 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:50.917 "name": "raid_bdev1", 00:20:50.917 "uuid": "dba32c6c-79ab-4de6-8704-f17b46305f26", 00:20:50.917 "strip_size_kb": 0, 00:20:50.917 "state": "online", 00:20:50.917 "raid_level": "raid1", 00:20:50.917 "superblock": true, 00:20:50.917 "num_base_bdevs": 4, 00:20:50.917 "num_base_bdevs_discovered": 4, 00:20:50.917 "num_base_bdevs_operational": 4, 00:20:50.917 "process": { 00:20:50.917 "type": "rebuild", 00:20:50.917 "target": "spare", 00:20:50.917 "progress": { 
00:20:50.917 "blocks": 10240, 00:20:50.917 "percent": 16 00:20:50.917 } 00:20:50.917 }, 00:20:50.917 "base_bdevs_list": [ 00:20:50.917 { 00:20:50.917 "name": "spare", 00:20:50.917 "uuid": "cbdd6734-dc5b-56a7-9a9c-ee23874b2269", 00:20:50.917 "is_configured": true, 00:20:50.917 "data_offset": 2048, 00:20:50.917 "data_size": 63488 00:20:50.917 }, 00:20:50.917 { 00:20:50.917 "name": "BaseBdev2", 00:20:50.917 "uuid": "addcfc1a-070d-5bed-9706-8c4b82c05b90", 00:20:50.917 "is_configured": true, 00:20:50.917 "data_offset": 2048, 00:20:50.917 "data_size": 63488 00:20:50.917 }, 00:20:50.917 { 00:20:50.917 "name": "BaseBdev3", 00:20:50.917 "uuid": "d1bb9c65-6f8d-5475-b11c-40483c2a20a4", 00:20:50.917 "is_configured": true, 00:20:50.917 "data_offset": 2048, 00:20:50.917 "data_size": 63488 00:20:50.917 }, 00:20:50.917 { 00:20:50.917 "name": "BaseBdev4", 00:20:50.917 "uuid": "b07f58aa-112c-5abc-81d4-13b96d687bb7", 00:20:50.917 "is_configured": true, 00:20:50.917 "data_offset": 2048, 00:20:50.917 "data_size": 63488 00:20:50.917 } 00:20:50.917 ] 00:20:50.917 }' 00:20:50.917 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:50.917 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:50.917 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:50.917 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:50.917 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:50.917 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.917 13:15:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:50.917 [2024-12-06 13:15:37.847164] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:51.175 [2024-12-06 
13:15:37.939833] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:51.176 [2024-12-06 13:15:38.059981] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:51.176 [2024-12-06 13:15:38.077597] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:51.176 [2024-12-06 13:15:38.077923] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:51.176 [2024-12-06 13:15:38.077986] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:51.176 [2024-12-06 13:15:38.095242] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:20:51.176 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.176 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:51.176 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:51.176 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:51.176 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:51.176 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:51.176 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:51.176 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:51.176 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:51.176 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:51.176 13:15:38 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:20:51.176 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.176 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.176 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.176 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:51.176 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.176 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:51.176 "name": "raid_bdev1", 00:20:51.176 "uuid": "dba32c6c-79ab-4de6-8704-f17b46305f26", 00:20:51.176 "strip_size_kb": 0, 00:20:51.176 "state": "online", 00:20:51.176 "raid_level": "raid1", 00:20:51.176 "superblock": true, 00:20:51.176 "num_base_bdevs": 4, 00:20:51.176 "num_base_bdevs_discovered": 3, 00:20:51.176 "num_base_bdevs_operational": 3, 00:20:51.176 "base_bdevs_list": [ 00:20:51.176 { 00:20:51.176 "name": null, 00:20:51.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.176 "is_configured": false, 00:20:51.176 "data_offset": 0, 00:20:51.176 "data_size": 63488 00:20:51.176 }, 00:20:51.176 { 00:20:51.176 "name": "BaseBdev2", 00:20:51.176 "uuid": "addcfc1a-070d-5bed-9706-8c4b82c05b90", 00:20:51.176 "is_configured": true, 00:20:51.176 "data_offset": 2048, 00:20:51.176 "data_size": 63488 00:20:51.176 }, 00:20:51.176 { 00:20:51.176 "name": "BaseBdev3", 00:20:51.176 "uuid": "d1bb9c65-6f8d-5475-b11c-40483c2a20a4", 00:20:51.176 "is_configured": true, 00:20:51.176 "data_offset": 2048, 00:20:51.176 "data_size": 63488 00:20:51.176 }, 00:20:51.176 { 00:20:51.176 "name": "BaseBdev4", 00:20:51.176 "uuid": "b07f58aa-112c-5abc-81d4-13b96d687bb7", 00:20:51.176 "is_configured": true, 00:20:51.176 "data_offset": 2048, 00:20:51.176 "data_size": 63488 00:20:51.176 } 
00:20:51.176 ] 00:20:51.176 }' 00:20:51.176 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:51.176 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:51.691 80.50 IOPS, 241.50 MiB/s [2024-12-06T13:15:38.707Z] 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:51.691 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:51.691 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:51.691 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:51.691 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:51.691 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.691 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.691 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:51.691 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.691 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.691 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:51.691 "name": "raid_bdev1", 00:20:51.691 "uuid": "dba32c6c-79ab-4de6-8704-f17b46305f26", 00:20:51.691 "strip_size_kb": 0, 00:20:51.691 "state": "online", 00:20:51.691 "raid_level": "raid1", 00:20:51.691 "superblock": true, 00:20:51.691 "num_base_bdevs": 4, 00:20:51.691 "num_base_bdevs_discovered": 3, 00:20:51.691 "num_base_bdevs_operational": 3, 00:20:51.691 "base_bdevs_list": [ 00:20:51.691 { 00:20:51.691 "name": null, 00:20:51.691 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:51.691 "is_configured": false, 00:20:51.691 "data_offset": 0, 00:20:51.691 "data_size": 63488 00:20:51.691 }, 00:20:51.691 { 00:20:51.691 "name": "BaseBdev2", 00:20:51.691 "uuid": "addcfc1a-070d-5bed-9706-8c4b82c05b90", 00:20:51.691 "is_configured": true, 00:20:51.691 "data_offset": 2048, 00:20:51.691 "data_size": 63488 00:20:51.691 }, 00:20:51.691 { 00:20:51.691 "name": "BaseBdev3", 00:20:51.691 "uuid": "d1bb9c65-6f8d-5475-b11c-40483c2a20a4", 00:20:51.691 "is_configured": true, 00:20:51.691 "data_offset": 2048, 00:20:51.691 "data_size": 63488 00:20:51.691 }, 00:20:51.691 { 00:20:51.691 "name": "BaseBdev4", 00:20:51.691 "uuid": "b07f58aa-112c-5abc-81d4-13b96d687bb7", 00:20:51.691 "is_configured": true, 00:20:51.691 "data_offset": 2048, 00:20:51.691 "data_size": 63488 00:20:51.691 } 00:20:51.691 ] 00:20:51.691 }' 00:20:51.692 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:51.949 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:51.949 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:51.949 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:51.949 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:51.949 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.949 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:51.949 [2024-12-06 13:15:38.805454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:51.949 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.949 13:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 
00:20:51.949 [2024-12-06 13:15:38.868886] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:20:51.949 [2024-12-06 13:15:38.871828] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:52.207 [2024-12-06 13:15:39.004033] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:52.207 [2024-12-06 13:15:39.004976] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:52.464 114.00 IOPS, 342.00 MiB/s [2024-12-06T13:15:39.480Z] [2024-12-06 13:15:39.255674] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:52.464 [2024-12-06 13:15:39.256379] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:52.721 [2024-12-06 13:15:39.603896] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:52.721 [2024-12-06 13:15:39.604822] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:52.979 [2024-12-06 13:15:39.754382] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:52.979 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:52.979 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:52.979 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:52.979 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:52.979 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:20:52.979 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.979 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:52.979 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.979 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:52.979 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.979 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:52.979 "name": "raid_bdev1", 00:20:52.979 "uuid": "dba32c6c-79ab-4de6-8704-f17b46305f26", 00:20:52.979 "strip_size_kb": 0, 00:20:52.979 "state": "online", 00:20:52.979 "raid_level": "raid1", 00:20:52.979 "superblock": true, 00:20:52.979 "num_base_bdevs": 4, 00:20:52.979 "num_base_bdevs_discovered": 4, 00:20:52.979 "num_base_bdevs_operational": 4, 00:20:52.979 "process": { 00:20:52.979 "type": "rebuild", 00:20:52.979 "target": "spare", 00:20:52.979 "progress": { 00:20:52.979 "blocks": 10240, 00:20:52.979 "percent": 16 00:20:52.979 } 00:20:52.979 }, 00:20:52.979 "base_bdevs_list": [ 00:20:52.979 { 00:20:52.979 "name": "spare", 00:20:52.979 "uuid": "cbdd6734-dc5b-56a7-9a9c-ee23874b2269", 00:20:52.979 "is_configured": true, 00:20:52.979 "data_offset": 2048, 00:20:52.979 "data_size": 63488 00:20:52.979 }, 00:20:52.979 { 00:20:52.979 "name": "BaseBdev2", 00:20:52.979 "uuid": "addcfc1a-070d-5bed-9706-8c4b82c05b90", 00:20:52.979 "is_configured": true, 00:20:52.979 "data_offset": 2048, 00:20:52.980 "data_size": 63488 00:20:52.980 }, 00:20:52.980 { 00:20:52.980 "name": "BaseBdev3", 00:20:52.980 "uuid": "d1bb9c65-6f8d-5475-b11c-40483c2a20a4", 00:20:52.980 "is_configured": true, 00:20:52.980 "data_offset": 2048, 00:20:52.980 "data_size": 63488 00:20:52.980 }, 00:20:52.980 { 00:20:52.980 "name": 
"BaseBdev4", 00:20:52.980 "uuid": "b07f58aa-112c-5abc-81d4-13b96d687bb7", 00:20:52.980 "is_configured": true, 00:20:52.980 "data_offset": 2048, 00:20:52.980 "data_size": 63488 00:20:52.980 } 00:20:52.980 ] 00:20:52.980 }' 00:20:52.980 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:52.980 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:52.980 13:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:53.237 [2024-12-06 13:15:39.993078] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:53.237 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:53.237 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:53.237 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:53.237 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:53.237 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:20:53.237 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:53.237 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:20:53.237 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:53.237 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.237 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:53.237 [2024-12-06 13:15:40.018911] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:53.499 109.25 IOPS, 327.75 MiB/s 
[2024-12-06T13:15:40.515Z] [2024-12-06 13:15:40.329451] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:20:53.499 [2024-12-06 13:15:40.329857] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:20:53.499 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.499 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:20:53.499 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:20:53.499 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:53.499 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:53.499 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:53.499 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:53.499 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:53.499 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.499 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.499 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.499 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:53.499 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.499 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:53.499 "name": "raid_bdev1", 00:20:53.499 "uuid": "dba32c6c-79ab-4de6-8704-f17b46305f26", 00:20:53.499 "strip_size_kb": 0, 
00:20:53.499 "state": "online", 00:20:53.499 "raid_level": "raid1", 00:20:53.499 "superblock": true, 00:20:53.499 "num_base_bdevs": 4, 00:20:53.499 "num_base_bdevs_discovered": 3, 00:20:53.499 "num_base_bdevs_operational": 3, 00:20:53.499 "process": { 00:20:53.499 "type": "rebuild", 00:20:53.499 "target": "spare", 00:20:53.499 "progress": { 00:20:53.499 "blocks": 16384, 00:20:53.499 "percent": 25 00:20:53.499 } 00:20:53.499 }, 00:20:53.499 "base_bdevs_list": [ 00:20:53.499 { 00:20:53.499 "name": "spare", 00:20:53.499 "uuid": "cbdd6734-dc5b-56a7-9a9c-ee23874b2269", 00:20:53.499 "is_configured": true, 00:20:53.499 "data_offset": 2048, 00:20:53.499 "data_size": 63488 00:20:53.499 }, 00:20:53.499 { 00:20:53.499 "name": null, 00:20:53.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.499 "is_configured": false, 00:20:53.499 "data_offset": 0, 00:20:53.499 "data_size": 63488 00:20:53.499 }, 00:20:53.499 { 00:20:53.499 "name": "BaseBdev3", 00:20:53.499 "uuid": "d1bb9c65-6f8d-5475-b11c-40483c2a20a4", 00:20:53.499 "is_configured": true, 00:20:53.499 "data_offset": 2048, 00:20:53.499 "data_size": 63488 00:20:53.499 }, 00:20:53.499 { 00:20:53.499 "name": "BaseBdev4", 00:20:53.499 "uuid": "b07f58aa-112c-5abc-81d4-13b96d687bb7", 00:20:53.499 "is_configured": true, 00:20:53.499 "data_offset": 2048, 00:20:53.499 "data_size": 63488 00:20:53.499 } 00:20:53.499 ] 00:20:53.499 }' 00:20:53.499 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:53.499 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:53.499 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:53.499 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:53.499 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=551 00:20:53.499 13:15:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:53.499 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:53.499 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:53.499 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:53.499 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:53.499 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:53.499 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.499 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.499 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.499 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:53.759 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.759 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:53.759 "name": "raid_bdev1", 00:20:53.759 "uuid": "dba32c6c-79ab-4de6-8704-f17b46305f26", 00:20:53.759 "strip_size_kb": 0, 00:20:53.759 "state": "online", 00:20:53.759 "raid_level": "raid1", 00:20:53.759 "superblock": true, 00:20:53.759 "num_base_bdevs": 4, 00:20:53.759 "num_base_bdevs_discovered": 3, 00:20:53.759 "num_base_bdevs_operational": 3, 00:20:53.759 "process": { 00:20:53.759 "type": "rebuild", 00:20:53.759 "target": "spare", 00:20:53.759 "progress": { 00:20:53.759 "blocks": 16384, 00:20:53.759 "percent": 25 00:20:53.759 } 00:20:53.759 }, 00:20:53.759 "base_bdevs_list": [ 00:20:53.759 { 00:20:53.759 "name": "spare", 00:20:53.759 "uuid": 
"cbdd6734-dc5b-56a7-9a9c-ee23874b2269", 00:20:53.759 "is_configured": true, 00:20:53.759 "data_offset": 2048, 00:20:53.759 "data_size": 63488 00:20:53.759 }, 00:20:53.759 { 00:20:53.759 "name": null, 00:20:53.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.759 "is_configured": false, 00:20:53.759 "data_offset": 0, 00:20:53.759 "data_size": 63488 00:20:53.759 }, 00:20:53.759 { 00:20:53.759 "name": "BaseBdev3", 00:20:53.759 "uuid": "d1bb9c65-6f8d-5475-b11c-40483c2a20a4", 00:20:53.759 "is_configured": true, 00:20:53.759 "data_offset": 2048, 00:20:53.759 "data_size": 63488 00:20:53.759 }, 00:20:53.759 { 00:20:53.759 "name": "BaseBdev4", 00:20:53.759 "uuid": "b07f58aa-112c-5abc-81d4-13b96d687bb7", 00:20:53.759 "is_configured": true, 00:20:53.759 "data_offset": 2048, 00:20:53.759 "data_size": 63488 00:20:53.759 } 00:20:53.759 ] 00:20:53.759 }' 00:20:53.759 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:53.759 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:53.759 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:53.759 [2024-12-06 13:15:40.625422] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:53.759 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:53.759 13:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:54.018 [2024-12-06 13:15:40.886714] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:54.018 [2024-12-06 13:15:40.887380] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:54.277 101.40 IOPS, 304.20 MiB/s [2024-12-06T13:15:41.293Z] [2024-12-06 
13:15:41.246514] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:20:54.277 [2024-12-06 13:15:41.247714] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:20:54.535 [2024-12-06 13:15:41.377742] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:20:54.535 [2024-12-06 13:15:41.378216] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:20:54.794 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:54.794 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:54.794 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:54.794 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:54.794 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:54.794 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:54.794 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.794 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.794 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.794 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:54.794 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.794 [2024-12-06 13:15:41.707589] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:20:54.794 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:54.794 "name": "raid_bdev1", 00:20:54.794 "uuid": "dba32c6c-79ab-4de6-8704-f17b46305f26", 00:20:54.794 "strip_size_kb": 0, 00:20:54.794 "state": "online", 00:20:54.794 "raid_level": "raid1", 00:20:54.794 "superblock": true, 00:20:54.794 "num_base_bdevs": 4, 00:20:54.794 "num_base_bdevs_discovered": 3, 00:20:54.794 "num_base_bdevs_operational": 3, 00:20:54.794 "process": { 00:20:54.794 "type": "rebuild", 00:20:54.794 "target": "spare", 00:20:54.794 "progress": { 00:20:54.794 "blocks": 30720, 00:20:54.794 "percent": 48 00:20:54.794 } 00:20:54.794 }, 00:20:54.794 "base_bdevs_list": [ 00:20:54.794 { 00:20:54.794 "name": "spare", 00:20:54.794 "uuid": "cbdd6734-dc5b-56a7-9a9c-ee23874b2269", 00:20:54.794 "is_configured": true, 00:20:54.794 "data_offset": 2048, 00:20:54.794 "data_size": 63488 00:20:54.794 }, 00:20:54.794 { 00:20:54.794 "name": null, 00:20:54.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.794 "is_configured": false, 00:20:54.794 "data_offset": 0, 00:20:54.794 "data_size": 63488 00:20:54.794 }, 00:20:54.794 { 00:20:54.794 "name": "BaseBdev3", 00:20:54.794 "uuid": "d1bb9c65-6f8d-5475-b11c-40483c2a20a4", 00:20:54.794 "is_configured": true, 00:20:54.794 "data_offset": 2048, 00:20:54.794 "data_size": 63488 00:20:54.794 }, 00:20:54.794 { 00:20:54.794 "name": "BaseBdev4", 00:20:54.794 "uuid": "b07f58aa-112c-5abc-81d4-13b96d687bb7", 00:20:54.794 "is_configured": true, 00:20:54.794 "data_offset": 2048, 00:20:54.794 "data_size": 63488 00:20:54.794 } 00:20:54.794 ] 00:20:54.794 }' 00:20:54.794 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:54.794 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:54.794 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:55.053 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:55.053 13:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:55.053 [2024-12-06 13:15:41.828374] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:20:55.569 91.50 IOPS, 274.50 MiB/s [2024-12-06T13:15:42.585Z] [2024-12-06 13:15:42.330672] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:20:55.569 [2024-12-06 13:15:42.567670] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:20:55.827 13:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:55.827 13:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:55.827 13:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:55.827 13:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:55.827 13:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:55.828 13:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:55.828 13:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.828 13:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.828 13:15:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.828 13:15:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:56.086 13:15:42 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.086 13:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:56.086 "name": "raid_bdev1", 00:20:56.086 "uuid": "dba32c6c-79ab-4de6-8704-f17b46305f26", 00:20:56.086 "strip_size_kb": 0, 00:20:56.086 "state": "online", 00:20:56.086 "raid_level": "raid1", 00:20:56.086 "superblock": true, 00:20:56.086 "num_base_bdevs": 4, 00:20:56.086 "num_base_bdevs_discovered": 3, 00:20:56.086 "num_base_bdevs_operational": 3, 00:20:56.086 "process": { 00:20:56.086 "type": "rebuild", 00:20:56.086 "target": "spare", 00:20:56.086 "progress": { 00:20:56.087 "blocks": 49152, 00:20:56.087 "percent": 77 00:20:56.087 } 00:20:56.087 }, 00:20:56.087 "base_bdevs_list": [ 00:20:56.087 { 00:20:56.087 "name": "spare", 00:20:56.087 "uuid": "cbdd6734-dc5b-56a7-9a9c-ee23874b2269", 00:20:56.087 "is_configured": true, 00:20:56.087 "data_offset": 2048, 00:20:56.087 "data_size": 63488 00:20:56.087 }, 00:20:56.087 { 00:20:56.087 "name": null, 00:20:56.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:56.087 "is_configured": false, 00:20:56.087 "data_offset": 0, 00:20:56.087 "data_size": 63488 00:20:56.087 }, 00:20:56.087 { 00:20:56.087 "name": "BaseBdev3", 00:20:56.087 "uuid": "d1bb9c65-6f8d-5475-b11c-40483c2a20a4", 00:20:56.087 "is_configured": true, 00:20:56.087 "data_offset": 2048, 00:20:56.087 "data_size": 63488 00:20:56.087 }, 00:20:56.087 { 00:20:56.087 "name": "BaseBdev4", 00:20:56.087 "uuid": "b07f58aa-112c-5abc-81d4-13b96d687bb7", 00:20:56.087 "is_configured": true, 00:20:56.087 "data_offset": 2048, 00:20:56.087 "data_size": 63488 00:20:56.087 } 00:20:56.087 ] 00:20:56.087 }' 00:20:56.087 13:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:56.087 13:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:56.087 13:15:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:56.087 13:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:56.087 13:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:56.087 [2024-12-06 13:15:43.049217] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:20:56.604 83.86 IOPS, 251.57 MiB/s [2024-12-06T13:15:43.620Z] [2024-12-06 13:15:43.519557] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:20:56.863 [2024-12-06 13:15:43.860448] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:57.123 [2024-12-06 13:15:43.960351] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:57.123 [2024-12-06 13:15:43.965170] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:57.123 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:57.123 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:57.123 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:57.123 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:57.123 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:57.123 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:57.123 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.123 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.123 
13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.123 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:57.123 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.123 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:57.123 "name": "raid_bdev1", 00:20:57.123 "uuid": "dba32c6c-79ab-4de6-8704-f17b46305f26", 00:20:57.123 "strip_size_kb": 0, 00:20:57.123 "state": "online", 00:20:57.123 "raid_level": "raid1", 00:20:57.123 "superblock": true, 00:20:57.123 "num_base_bdevs": 4, 00:20:57.123 "num_base_bdevs_discovered": 3, 00:20:57.123 "num_base_bdevs_operational": 3, 00:20:57.123 "base_bdevs_list": [ 00:20:57.123 { 00:20:57.123 "name": "spare", 00:20:57.123 "uuid": "cbdd6734-dc5b-56a7-9a9c-ee23874b2269", 00:20:57.123 "is_configured": true, 00:20:57.123 "data_offset": 2048, 00:20:57.123 "data_size": 63488 00:20:57.123 }, 00:20:57.123 { 00:20:57.123 "name": null, 00:20:57.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.123 "is_configured": false, 00:20:57.123 "data_offset": 0, 00:20:57.123 "data_size": 63488 00:20:57.123 }, 00:20:57.123 { 00:20:57.123 "name": "BaseBdev3", 00:20:57.123 "uuid": "d1bb9c65-6f8d-5475-b11c-40483c2a20a4", 00:20:57.123 "is_configured": true, 00:20:57.123 "data_offset": 2048, 00:20:57.123 "data_size": 63488 00:20:57.123 }, 00:20:57.123 { 00:20:57.123 "name": "BaseBdev4", 00:20:57.123 "uuid": "b07f58aa-112c-5abc-81d4-13b96d687bb7", 00:20:57.123 "is_configured": true, 00:20:57.123 "data_offset": 2048, 00:20:57.123 "data_size": 63488 00:20:57.123 } 00:20:57.123 ] 00:20:57.123 }' 00:20:57.123 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:57.123 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:57.123 
13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:57.382 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:57.382 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:20:57.382 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:57.382 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:57.382 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:57.382 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:57.382 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:57.382 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.382 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.382 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:57.382 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.382 77.62 IOPS, 232.88 MiB/s [2024-12-06T13:15:44.398Z] 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.382 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:57.382 "name": "raid_bdev1", 00:20:57.382 "uuid": "dba32c6c-79ab-4de6-8704-f17b46305f26", 00:20:57.382 "strip_size_kb": 0, 00:20:57.382 "state": "online", 00:20:57.382 "raid_level": "raid1", 00:20:57.382 "superblock": true, 00:20:57.382 "num_base_bdevs": 4, 00:20:57.382 "num_base_bdevs_discovered": 3, 00:20:57.382 "num_base_bdevs_operational": 3, 00:20:57.382 "base_bdevs_list": [ 
00:20:57.382 { 00:20:57.382 "name": "spare", 00:20:57.382 "uuid": "cbdd6734-dc5b-56a7-9a9c-ee23874b2269", 00:20:57.382 "is_configured": true, 00:20:57.382 "data_offset": 2048, 00:20:57.382 "data_size": 63488 00:20:57.382 }, 00:20:57.382 { 00:20:57.382 "name": null, 00:20:57.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.382 "is_configured": false, 00:20:57.382 "data_offset": 0, 00:20:57.382 "data_size": 63488 00:20:57.382 }, 00:20:57.382 { 00:20:57.382 "name": "BaseBdev3", 00:20:57.382 "uuid": "d1bb9c65-6f8d-5475-b11c-40483c2a20a4", 00:20:57.382 "is_configured": true, 00:20:57.382 "data_offset": 2048, 00:20:57.382 "data_size": 63488 00:20:57.382 }, 00:20:57.382 { 00:20:57.382 "name": "BaseBdev4", 00:20:57.382 "uuid": "b07f58aa-112c-5abc-81d4-13b96d687bb7", 00:20:57.382 "is_configured": true, 00:20:57.382 "data_offset": 2048, 00:20:57.382 "data_size": 63488 00:20:57.382 } 00:20:57.382 ] 00:20:57.382 }' 00:20:57.382 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:57.382 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:57.382 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:57.382 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:57.382 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:57.382 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:57.382 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:57.382 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:57.382 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:57.382 13:15:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:57.382 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:57.382 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:57.382 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:57.382 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:57.382 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.382 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.382 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:57.382 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.382 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.382 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:57.382 "name": "raid_bdev1", 00:20:57.382 "uuid": "dba32c6c-79ab-4de6-8704-f17b46305f26", 00:20:57.382 "strip_size_kb": 0, 00:20:57.382 "state": "online", 00:20:57.382 "raid_level": "raid1", 00:20:57.382 "superblock": true, 00:20:57.382 "num_base_bdevs": 4, 00:20:57.382 "num_base_bdevs_discovered": 3, 00:20:57.382 "num_base_bdevs_operational": 3, 00:20:57.382 "base_bdevs_list": [ 00:20:57.382 { 00:20:57.382 "name": "spare", 00:20:57.382 "uuid": "cbdd6734-dc5b-56a7-9a9c-ee23874b2269", 00:20:57.382 "is_configured": true, 00:20:57.382 "data_offset": 2048, 00:20:57.382 "data_size": 63488 00:20:57.382 }, 00:20:57.382 { 00:20:57.382 "name": null, 00:20:57.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.382 "is_configured": false, 00:20:57.382 "data_offset": 0, 00:20:57.382 
"data_size": 63488 00:20:57.382 }, 00:20:57.382 { 00:20:57.382 "name": "BaseBdev3", 00:20:57.382 "uuid": "d1bb9c65-6f8d-5475-b11c-40483c2a20a4", 00:20:57.382 "is_configured": true, 00:20:57.382 "data_offset": 2048, 00:20:57.382 "data_size": 63488 00:20:57.382 }, 00:20:57.382 { 00:20:57.382 "name": "BaseBdev4", 00:20:57.382 "uuid": "b07f58aa-112c-5abc-81d4-13b96d687bb7", 00:20:57.382 "is_configured": true, 00:20:57.382 "data_offset": 2048, 00:20:57.382 "data_size": 63488 00:20:57.382 } 00:20:57.382 ] 00:20:57.382 }' 00:20:57.382 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:57.382 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:57.950 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:57.950 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.950 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:57.950 [2024-12-06 13:15:44.871069] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:57.950 [2024-12-06 13:15:44.871127] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:57.950 00:20:57.951 Latency(us) 00:20:57.951 [2024-12-06T13:15:44.967Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.951 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:20:57.951 raid_bdev1 : 8.77 73.09 219.26 0.00 0.00 17502.30 296.03 122016.12 00:20:57.951 [2024-12-06T13:15:44.967Z] =================================================================================================================== 00:20:57.951 [2024-12-06T13:15:44.967Z] Total : 73.09 219.26 0.00 0.00 17502.30 296.03 122016.12 00:20:58.211 [2024-12-06 13:15:44.980934] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:20:58.211 [2024-12-06 13:15:44.981381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:58.211 { 00:20:58.211 "results": [ 00:20:58.211 { 00:20:58.211 "job": "raid_bdev1", 00:20:58.211 "core_mask": "0x1", 00:20:58.211 "workload": "randrw", 00:20:58.211 "percentage": 50, 00:20:58.211 "status": "finished", 00:20:58.211 "queue_depth": 2, 00:20:58.211 "io_size": 3145728, 00:20:58.211 "runtime": 8.770412, 00:20:58.211 "iops": 73.08664632858753, 00:20:58.211 "mibps": 219.2599389857626, 00:20:58.211 "io_failed": 0, 00:20:58.211 "io_timeout": 0, 00:20:58.211 "avg_latency_us": 17502.299202949936, 00:20:58.211 "min_latency_us": 296.0290909090909, 00:20:58.211 "max_latency_us": 122016.11636363636 00:20:58.211 } 00:20:58.211 ], 00:20:58.211 "core_count": 1 00:20:58.211 } 00:20:58.211 [2024-12-06 13:15:44.981622] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:58.211 [2024-12-06 13:15:44.981657] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:58.211 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.211 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.211 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.212 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:20:58.212 13:15:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:58.212 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.212 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:58.212 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:58.212 13:15:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:20:58.212 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:20:58.212 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:58.212 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:20:58.212 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:58.212 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:58.212 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:58.212 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:20:58.212 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:58.212 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:58.212 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:20:58.470 /dev/nbd0 00:20:58.470 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:58.470 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:58.470 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:58.470 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:20:58.470 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:58.470 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:58.470 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w 
nbd0 /proc/partitions 00:20:58.470 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:20:58.470 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:58.470 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:58.470 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:58.470 1+0 records in 00:20:58.470 1+0 records out 00:20:58.470 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000682718 s, 6.0 MB/s 00:20:58.470 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:58.470 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:20:58.470 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:58.470 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:58.470 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:20:58.470 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:58.470 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:58.470 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:20:58.470 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:20:58.470 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:20:58.470 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:20:58.470 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z 
BaseBdev3 ']' 00:20:58.470 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:20:58.470 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:58.470 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:20:58.470 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:58.470 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:58.470 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:58.470 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:20:58.470 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:58.470 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:58.470 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:20:58.727 /dev/nbd1 00:20:58.727 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:58.727 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:58.727 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:58.727 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:20:58.727 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:58.727 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:58.727 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:58.727 13:15:45 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:20:58.727 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:58.727 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:58.727 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:58.985 1+0 records in 00:20:58.985 1+0 records out 00:20:58.985 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000579389 s, 7.1 MB/s 00:20:58.985 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:58.985 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:20:58.985 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:58.985 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:58.985 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:20:58.985 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:58.985 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:58.985 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:58.985 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:20:58.985 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:58.985 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:58.985 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local 
nbd_list 00:20:58.985 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:20:58.985 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:58.985 13:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:59.552 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:59.552 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:59.552 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:59.552 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:59.552 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:59.552 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:59.552 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:20:59.553 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:20:59.553 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:20:59.553 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:20:59.553 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:20:59.553 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:59.553 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:20:59.553 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:59.553 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd1') 00:20:59.553 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:59.553 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:20:59.553 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:59.553 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:59.553 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:20:59.553 /dev/nbd1 00:20:59.553 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:59.553 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:59.553 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:59.553 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:20:59.553 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:59.553 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:59.553 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:59.553 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:20:59.553 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:59.553 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:59.553 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:59.553 1+0 records in 00:20:59.553 1+0 records out 00:20:59.553 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000506945 s, 8.1 MB/s 00:20:59.553 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:59.811 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:20:59.811 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:59.811 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:59.811 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:20:59.811 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:59.811 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:59.811 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:59.811 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:20:59.811 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:59.811 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:59.811 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:59.811 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:20:59.811 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:59.811 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:00.069 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:00.069 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:21:00.069 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:00.069 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:00.069 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:00.069 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:00.069 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:21:00.069 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:00.069 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:00.069 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:00.069 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:00.069 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:00.069 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:21:00.069 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:00.069 13:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:00.327 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:00.327 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:00.327 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:00.327 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:00.327 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:21:00.327 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:00.327 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:21:00.327 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:00.327 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:00.327 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:00.327 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.327 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:00.327 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.327 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:00.327 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.327 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:00.327 [2024-12-06 13:15:47.302443] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:00.327 [2024-12-06 13:15:47.302572] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:00.327 [2024-12-06 13:15:47.302615] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:21:00.327 [2024-12-06 13:15:47.302635] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:00.327 [2024-12-06 13:15:47.306062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:00.327 [2024-12-06 13:15:47.306282] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:00.327 [2024-12-06 13:15:47.306431] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:00.327 [2024-12-06 13:15:47.306551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:00.328 [2024-12-06 13:15:47.306861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:00.328 spare 00:21:00.328 [2024-12-06 13:15:47.307049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:00.328 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.328 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:00.328 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.328 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:00.586 [2024-12-06 13:15:47.407227] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:00.586 [2024-12-06 13:15:47.407313] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:00.586 [2024-12-06 13:15:47.407888] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:21:00.586 [2024-12-06 13:15:47.408217] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:00.586 [2024-12-06 13:15:47.408241] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:00.586 [2024-12-06 13:15:47.408537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:00.586 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.586 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:00.586 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:00.586 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:00.586 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:00.586 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:00.586 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:00.586 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:00.586 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:00.586 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:00.586 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:00.586 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.586 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.586 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.586 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:00.586 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.586 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:00.586 "name": "raid_bdev1", 00:21:00.586 "uuid": "dba32c6c-79ab-4de6-8704-f17b46305f26", 00:21:00.586 "strip_size_kb": 0, 00:21:00.586 "state": "online", 00:21:00.586 "raid_level": "raid1", 00:21:00.586 "superblock": true, 00:21:00.586 "num_base_bdevs": 4, 00:21:00.586 "num_base_bdevs_discovered": 3, 00:21:00.586 "num_base_bdevs_operational": 3, 00:21:00.586 "base_bdevs_list": [ 
00:21:00.586 { 00:21:00.586 "name": "spare", 00:21:00.586 "uuid": "cbdd6734-dc5b-56a7-9a9c-ee23874b2269", 00:21:00.586 "is_configured": true, 00:21:00.586 "data_offset": 2048, 00:21:00.586 "data_size": 63488 00:21:00.586 }, 00:21:00.586 { 00:21:00.586 "name": null, 00:21:00.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.586 "is_configured": false, 00:21:00.586 "data_offset": 2048, 00:21:00.586 "data_size": 63488 00:21:00.586 }, 00:21:00.586 { 00:21:00.586 "name": "BaseBdev3", 00:21:00.586 "uuid": "d1bb9c65-6f8d-5475-b11c-40483c2a20a4", 00:21:00.586 "is_configured": true, 00:21:00.586 "data_offset": 2048, 00:21:00.586 "data_size": 63488 00:21:00.586 }, 00:21:00.586 { 00:21:00.586 "name": "BaseBdev4", 00:21:00.586 "uuid": "b07f58aa-112c-5abc-81d4-13b96d687bb7", 00:21:00.586 "is_configured": true, 00:21:00.586 "data_offset": 2048, 00:21:00.586 "data_size": 63488 00:21:00.586 } 00:21:00.586 ] 00:21:00.586 }' 00:21:00.586 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:00.586 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:01.152 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:01.152 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:01.152 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:01.152 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:01.152 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:01.152 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.152 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.152 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:21:01.152 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.152 13:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.152 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:01.152 "name": "raid_bdev1", 00:21:01.152 "uuid": "dba32c6c-79ab-4de6-8704-f17b46305f26", 00:21:01.152 "strip_size_kb": 0, 00:21:01.152 "state": "online", 00:21:01.152 "raid_level": "raid1", 00:21:01.152 "superblock": true, 00:21:01.152 "num_base_bdevs": 4, 00:21:01.152 "num_base_bdevs_discovered": 3, 00:21:01.152 "num_base_bdevs_operational": 3, 00:21:01.152 "base_bdevs_list": [ 00:21:01.152 { 00:21:01.152 "name": "spare", 00:21:01.152 "uuid": "cbdd6734-dc5b-56a7-9a9c-ee23874b2269", 00:21:01.152 "is_configured": true, 00:21:01.152 "data_offset": 2048, 00:21:01.152 "data_size": 63488 00:21:01.152 }, 00:21:01.152 { 00:21:01.152 "name": null, 00:21:01.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.152 "is_configured": false, 00:21:01.152 "data_offset": 2048, 00:21:01.152 "data_size": 63488 00:21:01.152 }, 00:21:01.152 { 00:21:01.152 "name": "BaseBdev3", 00:21:01.152 "uuid": "d1bb9c65-6f8d-5475-b11c-40483c2a20a4", 00:21:01.152 "is_configured": true, 00:21:01.152 "data_offset": 2048, 00:21:01.152 "data_size": 63488 00:21:01.152 }, 00:21:01.152 { 00:21:01.152 "name": "BaseBdev4", 00:21:01.152 "uuid": "b07f58aa-112c-5abc-81d4-13b96d687bb7", 00:21:01.152 "is_configured": true, 00:21:01.152 "data_offset": 2048, 00:21:01.152 "data_size": 63488 00:21:01.152 } 00:21:01.152 ] 00:21:01.152 }' 00:21:01.152 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:01.152 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:01.152 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:01.152 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:01.152 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:01.152 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.152 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.152 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:01.152 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.152 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:01.152 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:01.152 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.152 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:01.152 [2024-12-06 13:15:48.163074] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:01.410 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.410 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:01.410 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:01.410 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:01.410 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:01.410 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:01.410 13:15:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:01.410 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:01.410 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:01.410 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:01.410 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:01.410 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.410 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.410 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.410 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:01.410 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.410 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:01.410 "name": "raid_bdev1", 00:21:01.410 "uuid": "dba32c6c-79ab-4de6-8704-f17b46305f26", 00:21:01.410 "strip_size_kb": 0, 00:21:01.410 "state": "online", 00:21:01.410 "raid_level": "raid1", 00:21:01.410 "superblock": true, 00:21:01.410 "num_base_bdevs": 4, 00:21:01.410 "num_base_bdevs_discovered": 2, 00:21:01.410 "num_base_bdevs_operational": 2, 00:21:01.410 "base_bdevs_list": [ 00:21:01.410 { 00:21:01.410 "name": null, 00:21:01.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.410 "is_configured": false, 00:21:01.410 "data_offset": 0, 00:21:01.410 "data_size": 63488 00:21:01.410 }, 00:21:01.410 { 00:21:01.410 "name": null, 00:21:01.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.410 "is_configured": false, 00:21:01.410 "data_offset": 2048, 00:21:01.410 
"data_size": 63488 00:21:01.410 }, 00:21:01.410 { 00:21:01.410 "name": "BaseBdev3", 00:21:01.410 "uuid": "d1bb9c65-6f8d-5475-b11c-40483c2a20a4", 00:21:01.410 "is_configured": true, 00:21:01.410 "data_offset": 2048, 00:21:01.410 "data_size": 63488 00:21:01.410 }, 00:21:01.410 { 00:21:01.410 "name": "BaseBdev4", 00:21:01.410 "uuid": "b07f58aa-112c-5abc-81d4-13b96d687bb7", 00:21:01.410 "is_configured": true, 00:21:01.410 "data_offset": 2048, 00:21:01.410 "data_size": 63488 00:21:01.410 } 00:21:01.410 ] 00:21:01.410 }' 00:21:01.410 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:01.410 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:01.975 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:01.975 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.975 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:01.975 [2024-12-06 13:15:48.687333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:01.976 [2024-12-06 13:15:48.687653] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:21:01.976 [2024-12-06 13:15:48.687684] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:01.976 [2024-12-06 13:15:48.687745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:01.976 [2024-12-06 13:15:48.702416] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:21:01.976 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.976 13:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:01.976 [2024-12-06 13:15:48.705390] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:02.907 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:02.907 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:02.907 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:02.907 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:02.907 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:02.907 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.907 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.907 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.907 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:02.907 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.907 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:02.907 "name": "raid_bdev1", 00:21:02.907 "uuid": "dba32c6c-79ab-4de6-8704-f17b46305f26", 00:21:02.907 "strip_size_kb": 0, 00:21:02.907 "state": "online", 
00:21:02.907 "raid_level": "raid1", 00:21:02.907 "superblock": true, 00:21:02.907 "num_base_bdevs": 4, 00:21:02.907 "num_base_bdevs_discovered": 3, 00:21:02.907 "num_base_bdevs_operational": 3, 00:21:02.908 "process": { 00:21:02.908 "type": "rebuild", 00:21:02.908 "target": "spare", 00:21:02.908 "progress": { 00:21:02.908 "blocks": 18432, 00:21:02.908 "percent": 29 00:21:02.908 } 00:21:02.908 }, 00:21:02.908 "base_bdevs_list": [ 00:21:02.908 { 00:21:02.908 "name": "spare", 00:21:02.908 "uuid": "cbdd6734-dc5b-56a7-9a9c-ee23874b2269", 00:21:02.908 "is_configured": true, 00:21:02.908 "data_offset": 2048, 00:21:02.908 "data_size": 63488 00:21:02.908 }, 00:21:02.908 { 00:21:02.908 "name": null, 00:21:02.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.908 "is_configured": false, 00:21:02.908 "data_offset": 2048, 00:21:02.908 "data_size": 63488 00:21:02.908 }, 00:21:02.908 { 00:21:02.908 "name": "BaseBdev3", 00:21:02.908 "uuid": "d1bb9c65-6f8d-5475-b11c-40483c2a20a4", 00:21:02.908 "is_configured": true, 00:21:02.908 "data_offset": 2048, 00:21:02.908 "data_size": 63488 00:21:02.908 }, 00:21:02.908 { 00:21:02.908 "name": "BaseBdev4", 00:21:02.908 "uuid": "b07f58aa-112c-5abc-81d4-13b96d687bb7", 00:21:02.908 "is_configured": true, 00:21:02.908 "data_offset": 2048, 00:21:02.908 "data_size": 63488 00:21:02.908 } 00:21:02.908 ] 00:21:02.908 }' 00:21:02.908 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:02.908 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:02.908 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:02.908 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:02.908 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:02.908 13:15:49 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.908 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:02.908 [2024-12-06 13:15:49.879698] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:02.908 [2024-12-06 13:15:49.917741] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:02.908 [2024-12-06 13:15:49.918159] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:02.908 [2024-12-06 13:15:49.918302] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:02.908 [2024-12-06 13:15:49.918362] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:03.176 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.176 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:03.176 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:03.176 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:03.176 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:03.176 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:03.176 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:03.176 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:03.176 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:03.176 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:03.176 13:15:49 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:03.176 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.176 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.176 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.176 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:03.176 13:15:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.176 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:03.176 "name": "raid_bdev1", 00:21:03.176 "uuid": "dba32c6c-79ab-4de6-8704-f17b46305f26", 00:21:03.176 "strip_size_kb": 0, 00:21:03.176 "state": "online", 00:21:03.176 "raid_level": "raid1", 00:21:03.176 "superblock": true, 00:21:03.176 "num_base_bdevs": 4, 00:21:03.176 "num_base_bdevs_discovered": 2, 00:21:03.176 "num_base_bdevs_operational": 2, 00:21:03.176 "base_bdevs_list": [ 00:21:03.176 { 00:21:03.176 "name": null, 00:21:03.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.176 "is_configured": false, 00:21:03.176 "data_offset": 0, 00:21:03.176 "data_size": 63488 00:21:03.176 }, 00:21:03.176 { 00:21:03.176 "name": null, 00:21:03.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.176 "is_configured": false, 00:21:03.176 "data_offset": 2048, 00:21:03.176 "data_size": 63488 00:21:03.176 }, 00:21:03.176 { 00:21:03.176 "name": "BaseBdev3", 00:21:03.176 "uuid": "d1bb9c65-6f8d-5475-b11c-40483c2a20a4", 00:21:03.176 "is_configured": true, 00:21:03.176 "data_offset": 2048, 00:21:03.176 "data_size": 63488 00:21:03.176 }, 00:21:03.176 { 00:21:03.176 "name": "BaseBdev4", 00:21:03.176 "uuid": "b07f58aa-112c-5abc-81d4-13b96d687bb7", 00:21:03.176 "is_configured": true, 00:21:03.176 "data_offset": 2048, 00:21:03.176 
"data_size": 63488 00:21:03.176 } 00:21:03.176 ] 00:21:03.176 }' 00:21:03.176 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:03.176 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:03.744 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:03.744 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.744 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:03.744 [2024-12-06 13:15:50.464525] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:03.744 [2024-12-06 13:15:50.464620] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:03.744 [2024-12-06 13:15:50.464665] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:21:03.744 [2024-12-06 13:15:50.464692] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:03.744 [2024-12-06 13:15:50.465419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:03.744 [2024-12-06 13:15:50.465459] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:03.744 [2024-12-06 13:15:50.465627] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:03.744 [2024-12-06 13:15:50.465655] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:21:03.744 [2024-12-06 13:15:50.465671] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:03.744 [2024-12-06 13:15:50.465712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:03.744 [2024-12-06 13:15:50.480545] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:21:03.744 spare 00:21:03.744 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.744 13:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:03.744 [2024-12-06 13:15:50.483358] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:04.692 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:04.692 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:04.692 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:04.692 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:04.692 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:04.692 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.692 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.692 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.692 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:04.692 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.692 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:04.692 "name": "raid_bdev1", 00:21:04.692 "uuid": "dba32c6c-79ab-4de6-8704-f17b46305f26", 00:21:04.692 "strip_size_kb": 0, 00:21:04.692 
"state": "online", 00:21:04.692 "raid_level": "raid1", 00:21:04.692 "superblock": true, 00:21:04.692 "num_base_bdevs": 4, 00:21:04.692 "num_base_bdevs_discovered": 3, 00:21:04.692 "num_base_bdevs_operational": 3, 00:21:04.692 "process": { 00:21:04.692 "type": "rebuild", 00:21:04.692 "target": "spare", 00:21:04.692 "progress": { 00:21:04.692 "blocks": 20480, 00:21:04.692 "percent": 32 00:21:04.692 } 00:21:04.692 }, 00:21:04.692 "base_bdevs_list": [ 00:21:04.692 { 00:21:04.692 "name": "spare", 00:21:04.692 "uuid": "cbdd6734-dc5b-56a7-9a9c-ee23874b2269", 00:21:04.692 "is_configured": true, 00:21:04.692 "data_offset": 2048, 00:21:04.692 "data_size": 63488 00:21:04.692 }, 00:21:04.692 { 00:21:04.692 "name": null, 00:21:04.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.692 "is_configured": false, 00:21:04.692 "data_offset": 2048, 00:21:04.692 "data_size": 63488 00:21:04.692 }, 00:21:04.692 { 00:21:04.692 "name": "BaseBdev3", 00:21:04.692 "uuid": "d1bb9c65-6f8d-5475-b11c-40483c2a20a4", 00:21:04.692 "is_configured": true, 00:21:04.692 "data_offset": 2048, 00:21:04.692 "data_size": 63488 00:21:04.692 }, 00:21:04.692 { 00:21:04.692 "name": "BaseBdev4", 00:21:04.692 "uuid": "b07f58aa-112c-5abc-81d4-13b96d687bb7", 00:21:04.692 "is_configured": true, 00:21:04.692 "data_offset": 2048, 00:21:04.692 "data_size": 63488 00:21:04.692 } 00:21:04.692 ] 00:21:04.692 }' 00:21:04.692 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:04.692 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:04.692 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:04.692 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:04.692 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:04.692 13:15:51 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.692 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:04.692 [2024-12-06 13:15:51.657431] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:04.950 [2024-12-06 13:15:51.695127] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:04.951 [2024-12-06 13:15:51.695427] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:04.951 [2024-12-06 13:15:51.695744] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:04.951 [2024-12-06 13:15:51.695803] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:04.951 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.951 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:04.951 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:04.951 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:04.951 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:04.951 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:04.951 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:04.951 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:04.951 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:04.951 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:04.951 13:15:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:04.951 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.951 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.951 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.951 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:04.951 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.951 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:04.951 "name": "raid_bdev1", 00:21:04.951 "uuid": "dba32c6c-79ab-4de6-8704-f17b46305f26", 00:21:04.951 "strip_size_kb": 0, 00:21:04.951 "state": "online", 00:21:04.951 "raid_level": "raid1", 00:21:04.951 "superblock": true, 00:21:04.951 "num_base_bdevs": 4, 00:21:04.951 "num_base_bdevs_discovered": 2, 00:21:04.951 "num_base_bdevs_operational": 2, 00:21:04.951 "base_bdevs_list": [ 00:21:04.951 { 00:21:04.951 "name": null, 00:21:04.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.951 "is_configured": false, 00:21:04.951 "data_offset": 0, 00:21:04.951 "data_size": 63488 00:21:04.951 }, 00:21:04.951 { 00:21:04.951 "name": null, 00:21:04.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.951 "is_configured": false, 00:21:04.951 "data_offset": 2048, 00:21:04.951 "data_size": 63488 00:21:04.951 }, 00:21:04.951 { 00:21:04.951 "name": "BaseBdev3", 00:21:04.951 "uuid": "d1bb9c65-6f8d-5475-b11c-40483c2a20a4", 00:21:04.951 "is_configured": true, 00:21:04.951 "data_offset": 2048, 00:21:04.951 "data_size": 63488 00:21:04.951 }, 00:21:04.951 { 00:21:04.951 "name": "BaseBdev4", 00:21:04.951 "uuid": "b07f58aa-112c-5abc-81d4-13b96d687bb7", 00:21:04.951 "is_configured": true, 00:21:04.951 "data_offset": 2048, 00:21:04.951 
"data_size": 63488 00:21:04.951 } 00:21:04.951 ] 00:21:04.951 }' 00:21:04.951 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:04.951 13:15:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:05.516 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:05.516 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:05.516 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:05.516 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:05.516 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:05.516 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.516 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.516 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.516 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:05.516 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.516 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:05.516 "name": "raid_bdev1", 00:21:05.516 "uuid": "dba32c6c-79ab-4de6-8704-f17b46305f26", 00:21:05.516 "strip_size_kb": 0, 00:21:05.516 "state": "online", 00:21:05.516 "raid_level": "raid1", 00:21:05.516 "superblock": true, 00:21:05.516 "num_base_bdevs": 4, 00:21:05.516 "num_base_bdevs_discovered": 2, 00:21:05.516 "num_base_bdevs_operational": 2, 00:21:05.516 "base_bdevs_list": [ 00:21:05.516 { 00:21:05.516 "name": null, 00:21:05.516 "uuid": "00000000-0000-0000-0000-000000000000", 
00:21:05.516 "is_configured": false, 00:21:05.516 "data_offset": 0, 00:21:05.516 "data_size": 63488 00:21:05.516 }, 00:21:05.516 { 00:21:05.516 "name": null, 00:21:05.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.516 "is_configured": false, 00:21:05.516 "data_offset": 2048, 00:21:05.516 "data_size": 63488 00:21:05.516 }, 00:21:05.516 { 00:21:05.516 "name": "BaseBdev3", 00:21:05.516 "uuid": "d1bb9c65-6f8d-5475-b11c-40483c2a20a4", 00:21:05.517 "is_configured": true, 00:21:05.517 "data_offset": 2048, 00:21:05.517 "data_size": 63488 00:21:05.517 }, 00:21:05.517 { 00:21:05.517 "name": "BaseBdev4", 00:21:05.517 "uuid": "b07f58aa-112c-5abc-81d4-13b96d687bb7", 00:21:05.517 "is_configured": true, 00:21:05.517 "data_offset": 2048, 00:21:05.517 "data_size": 63488 00:21:05.517 } 00:21:05.517 ] 00:21:05.517 }' 00:21:05.517 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:05.517 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:05.517 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:05.517 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:05.517 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:05.517 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.517 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:05.517 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.517 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:05.517 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.517 13:15:52 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:05.517 [2024-12-06 13:15:52.418424] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:05.517 [2024-12-06 13:15:52.418542] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:05.517 [2024-12-06 13:15:52.418586] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:21:05.517 [2024-12-06 13:15:52.418609] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:05.517 [2024-12-06 13:15:52.419307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.517 [2024-12-06 13:15:52.419350] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:05.517 [2024-12-06 13:15:52.419513] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:05.517 [2024-12-06 13:15:52.419538] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:21:05.517 [2024-12-06 13:15:52.419562] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:05.517 [2024-12-06 13:15:52.419578] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:05.517 BaseBdev1 00:21:05.517 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.517 13:15:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:06.495 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:06.495 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:06.495 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:21:06.495 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:06.495 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:06.495 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:06.495 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:06.495 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:06.495 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:06.495 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:06.495 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.495 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.495 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.495 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:06.495 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.753 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:06.753 "name": "raid_bdev1", 00:21:06.753 "uuid": "dba32c6c-79ab-4de6-8704-f17b46305f26", 00:21:06.753 "strip_size_kb": 0, 00:21:06.753 "state": "online", 00:21:06.753 "raid_level": "raid1", 00:21:06.753 "superblock": true, 00:21:06.753 "num_base_bdevs": 4, 00:21:06.753 "num_base_bdevs_discovered": 2, 00:21:06.753 "num_base_bdevs_operational": 2, 00:21:06.753 "base_bdevs_list": [ 00:21:06.753 { 00:21:06.753 "name": null, 00:21:06.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.753 "is_configured": false, 00:21:06.753 
"data_offset": 0, 00:21:06.753 "data_size": 63488 00:21:06.753 }, 00:21:06.753 { 00:21:06.753 "name": null, 00:21:06.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.753 "is_configured": false, 00:21:06.753 "data_offset": 2048, 00:21:06.753 "data_size": 63488 00:21:06.753 }, 00:21:06.753 { 00:21:06.753 "name": "BaseBdev3", 00:21:06.753 "uuid": "d1bb9c65-6f8d-5475-b11c-40483c2a20a4", 00:21:06.753 "is_configured": true, 00:21:06.753 "data_offset": 2048, 00:21:06.753 "data_size": 63488 00:21:06.753 }, 00:21:06.753 { 00:21:06.754 "name": "BaseBdev4", 00:21:06.754 "uuid": "b07f58aa-112c-5abc-81d4-13b96d687bb7", 00:21:06.754 "is_configured": true, 00:21:06.754 "data_offset": 2048, 00:21:06.754 "data_size": 63488 00:21:06.754 } 00:21:06.754 ] 00:21:06.754 }' 00:21:06.754 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:06.754 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:07.012 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:07.012 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:07.012 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:07.012 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:07.012 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:07.012 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.012 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.012 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.012 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:21:07.012 13:15:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.012 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:07.012 "name": "raid_bdev1", 00:21:07.012 "uuid": "dba32c6c-79ab-4de6-8704-f17b46305f26", 00:21:07.012 "strip_size_kb": 0, 00:21:07.012 "state": "online", 00:21:07.012 "raid_level": "raid1", 00:21:07.012 "superblock": true, 00:21:07.012 "num_base_bdevs": 4, 00:21:07.012 "num_base_bdevs_discovered": 2, 00:21:07.012 "num_base_bdevs_operational": 2, 00:21:07.012 "base_bdevs_list": [ 00:21:07.012 { 00:21:07.012 "name": null, 00:21:07.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.012 "is_configured": false, 00:21:07.012 "data_offset": 0, 00:21:07.012 "data_size": 63488 00:21:07.012 }, 00:21:07.012 { 00:21:07.012 "name": null, 00:21:07.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.012 "is_configured": false, 00:21:07.012 "data_offset": 2048, 00:21:07.012 "data_size": 63488 00:21:07.012 }, 00:21:07.012 { 00:21:07.012 "name": "BaseBdev3", 00:21:07.012 "uuid": "d1bb9c65-6f8d-5475-b11c-40483c2a20a4", 00:21:07.012 "is_configured": true, 00:21:07.012 "data_offset": 2048, 00:21:07.012 "data_size": 63488 00:21:07.012 }, 00:21:07.012 { 00:21:07.012 "name": "BaseBdev4", 00:21:07.012 "uuid": "b07f58aa-112c-5abc-81d4-13b96d687bb7", 00:21:07.012 "is_configured": true, 00:21:07.012 "data_offset": 2048, 00:21:07.012 "data_size": 63488 00:21:07.012 } 00:21:07.012 ] 00:21:07.012 }' 00:21:07.271 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:07.271 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:07.271 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:07.271 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:07.271 
13:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:07.271 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:21:07.271 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:07.271 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:07.271 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:07.271 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:07.271 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:07.271 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:07.271 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.271 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:07.271 [2024-12-06 13:15:54.135344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:07.271 [2024-12-06 13:15:54.135614] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:21:07.271 [2024-12-06 13:15:54.135642] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:07.271 request: 00:21:07.271 { 00:21:07.271 "base_bdev": "BaseBdev1", 00:21:07.271 "raid_bdev": "raid_bdev1", 00:21:07.271 "method": "bdev_raid_add_base_bdev", 00:21:07.271 "req_id": 1 00:21:07.271 } 00:21:07.271 Got JSON-RPC error response 00:21:07.271 response: 00:21:07.271 { 00:21:07.271 "code": -22, 00:21:07.271 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:07.271 } 00:21:07.271 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:07.271 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:21:07.271 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:07.271 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:07.271 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:07.271 13:15:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:08.207 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:08.207 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:08.207 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:08.207 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:08.207 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:08.207 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:08.207 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:08.207 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:08.207 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:08.207 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:08.208 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.208 13:15:55 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.208 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:08.208 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.208 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.208 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:08.208 "name": "raid_bdev1", 00:21:08.208 "uuid": "dba32c6c-79ab-4de6-8704-f17b46305f26", 00:21:08.208 "strip_size_kb": 0, 00:21:08.208 "state": "online", 00:21:08.208 "raid_level": "raid1", 00:21:08.208 "superblock": true, 00:21:08.208 "num_base_bdevs": 4, 00:21:08.208 "num_base_bdevs_discovered": 2, 00:21:08.208 "num_base_bdevs_operational": 2, 00:21:08.208 "base_bdevs_list": [ 00:21:08.208 { 00:21:08.208 "name": null, 00:21:08.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.208 "is_configured": false, 00:21:08.208 "data_offset": 0, 00:21:08.208 "data_size": 63488 00:21:08.208 }, 00:21:08.208 { 00:21:08.208 "name": null, 00:21:08.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.208 "is_configured": false, 00:21:08.208 "data_offset": 2048, 00:21:08.208 "data_size": 63488 00:21:08.208 }, 00:21:08.208 { 00:21:08.208 "name": "BaseBdev3", 00:21:08.208 "uuid": "d1bb9c65-6f8d-5475-b11c-40483c2a20a4", 00:21:08.208 "is_configured": true, 00:21:08.208 "data_offset": 2048, 00:21:08.208 "data_size": 63488 00:21:08.208 }, 00:21:08.208 { 00:21:08.208 "name": "BaseBdev4", 00:21:08.208 "uuid": "b07f58aa-112c-5abc-81d4-13b96d687bb7", 00:21:08.208 "is_configured": true, 00:21:08.208 "data_offset": 2048, 00:21:08.208 "data_size": 63488 00:21:08.208 } 00:21:08.208 ] 00:21:08.208 }' 00:21:08.208 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:08.208 13:15:55 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:08.775 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:08.775 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:08.775 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:08.775 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:08.775 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:08.775 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.775 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.775 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.775 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:08.775 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.775 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:08.775 "name": "raid_bdev1", 00:21:08.775 "uuid": "dba32c6c-79ab-4de6-8704-f17b46305f26", 00:21:08.775 "strip_size_kb": 0, 00:21:08.775 "state": "online", 00:21:08.775 "raid_level": "raid1", 00:21:08.775 "superblock": true, 00:21:08.775 "num_base_bdevs": 4, 00:21:08.775 "num_base_bdevs_discovered": 2, 00:21:08.775 "num_base_bdevs_operational": 2, 00:21:08.775 "base_bdevs_list": [ 00:21:08.775 { 00:21:08.775 "name": null, 00:21:08.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.775 "is_configured": false, 00:21:08.775 "data_offset": 0, 00:21:08.775 "data_size": 63488 00:21:08.775 }, 00:21:08.775 { 00:21:08.775 "name": null, 00:21:08.775 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:08.775 "is_configured": false, 00:21:08.775 "data_offset": 2048, 00:21:08.775 "data_size": 63488 00:21:08.775 }, 00:21:08.775 { 00:21:08.775 "name": "BaseBdev3", 00:21:08.775 "uuid": "d1bb9c65-6f8d-5475-b11c-40483c2a20a4", 00:21:08.775 "is_configured": true, 00:21:08.775 "data_offset": 2048, 00:21:08.775 "data_size": 63488 00:21:08.775 }, 00:21:08.775 { 00:21:08.775 "name": "BaseBdev4", 00:21:08.775 "uuid": "b07f58aa-112c-5abc-81d4-13b96d687bb7", 00:21:08.775 "is_configured": true, 00:21:08.775 "data_offset": 2048, 00:21:08.775 "data_size": 63488 00:21:08.775 } 00:21:08.775 ] 00:21:08.775 }' 00:21:08.775 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:08.775 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:08.775 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:09.034 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:09.034 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79834 00:21:09.034 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79834 ']' 00:21:09.034 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79834 00:21:09.034 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:21:09.034 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:09.034 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79834 00:21:09.034 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:09.034 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:21:09.034 killing process with pid 79834 00:21:09.034 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79834' 00:21:09.034 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79834 00:21:09.034 Received shutdown signal, test time was about 19.673252 seconds 00:21:09.034 00:21:09.034 Latency(us) 00:21:09.034 [2024-12-06T13:15:56.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:09.034 [2024-12-06T13:15:56.050Z] =================================================================================================================== 00:21:09.034 [2024-12-06T13:15:56.050Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:09.034 13:15:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79834 00:21:09.034 [2024-12-06 13:15:55.862092] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:09.034 [2024-12-06 13:15:55.862288] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:09.034 [2024-12-06 13:15:55.862394] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:09.034 [2024-12-06 13:15:55.862415] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:09.292 [2024-12-06 13:15:56.275270] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:10.668 ************************************ 00:21:10.668 END TEST raid_rebuild_test_sb_io 00:21:10.668 ************************************ 00:21:10.668 13:15:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:21:10.668 00:21:10.668 real 0m23.459s 00:21:10.668 user 0m31.771s 00:21:10.668 sys 0m2.509s 00:21:10.668 13:15:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:10.668 13:15:57 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:21:10.668 13:15:57 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:21:10.668 13:15:57 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:21:10.668 13:15:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:10.668 13:15:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:10.668 13:15:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:10.668 ************************************ 00:21:10.668 START TEST raid5f_state_function_test 00:21:10.668 ************************************ 00:21:10.668 13:15:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:21:10.668 13:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:21:10.668 13:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:21:10.668 13:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:21:10.668 13:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:10.668 13:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:10.668 13:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:10.668 13:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:10.669 13:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:10.669 13:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:10.669 13:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:10.669 13:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:10.669 13:15:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:10.669 13:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:21:10.669 13:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:10.669 13:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:10.669 13:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:10.669 13:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:10.669 13:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:10.669 13:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:10.669 13:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:10.669 13:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:10.669 13:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:21:10.669 13:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:21:10.669 13:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:21:10.669 13:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:21:10.669 13:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:21:10.669 13:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80577 00:21:10.669 Process raid pid: 80577 00:21:10.669 13:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:10.669 
13:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80577' 00:21:10.669 13:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80577 00:21:10.669 13:15:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80577 ']' 00:21:10.669 13:15:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.669 13:15:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:10.669 13:15:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.669 13:15:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:10.669 13:15:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:10.669 [2024-12-06 13:15:57.641366] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:21:10.669 [2024-12-06 13:15:57.641841] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.927 [2024-12-06 13:15:57.821338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.196 [2024-12-06 13:15:57.972870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.196 [2024-12-06 13:15:58.202411] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:11.196 [2024-12-06 13:15:58.202502] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:11.764 13:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:11.764 13:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:21:11.764 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:11.764 13:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.764 13:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.764 [2024-12-06 13:15:58.659302] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:11.764 [2024-12-06 13:15:58.659387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:11.764 [2024-12-06 13:15:58.659407] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:11.764 [2024-12-06 13:15:58.659425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:11.764 [2024-12-06 13:15:58.659442] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:21:11.764 [2024-12-06 13:15:58.659458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:11.764 13:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.764 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:11.764 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:11.764 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:11.764 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:11.764 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:11.764 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:11.764 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:11.764 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:11.764 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:11.764 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:11.764 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.764 13:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.764 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:11.764 13:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.764 13:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:21:11.764 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:11.764 "name": "Existed_Raid", 00:21:11.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.764 "strip_size_kb": 64, 00:21:11.764 "state": "configuring", 00:21:11.764 "raid_level": "raid5f", 00:21:11.764 "superblock": false, 00:21:11.764 "num_base_bdevs": 3, 00:21:11.764 "num_base_bdevs_discovered": 0, 00:21:11.764 "num_base_bdevs_operational": 3, 00:21:11.764 "base_bdevs_list": [ 00:21:11.764 { 00:21:11.764 "name": "BaseBdev1", 00:21:11.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.764 "is_configured": false, 00:21:11.764 "data_offset": 0, 00:21:11.764 "data_size": 0 00:21:11.764 }, 00:21:11.764 { 00:21:11.764 "name": "BaseBdev2", 00:21:11.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.764 "is_configured": false, 00:21:11.764 "data_offset": 0, 00:21:11.764 "data_size": 0 00:21:11.764 }, 00:21:11.764 { 00:21:11.764 "name": "BaseBdev3", 00:21:11.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.764 "is_configured": false, 00:21:11.764 "data_offset": 0, 00:21:11.764 "data_size": 0 00:21:11.764 } 00:21:11.764 ] 00:21:11.764 }' 00:21:11.764 13:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:11.764 13:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.331 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:12.331 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.332 [2024-12-06 13:15:59.191372] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:12.332 [2024-12-06 13:15:59.191422] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.332 [2024-12-06 13:15:59.199350] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:12.332 [2024-12-06 13:15:59.199439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:12.332 [2024-12-06 13:15:59.199458] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:12.332 [2024-12-06 13:15:59.199476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:12.332 [2024-12-06 13:15:59.199506] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:12.332 [2024-12-06 13:15:59.199524] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.332 [2024-12-06 13:15:59.247625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:12.332 BaseBdev1 00:21:12.332 13:15:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.332 [ 00:21:12.332 { 00:21:12.332 "name": "BaseBdev1", 00:21:12.332 "aliases": [ 00:21:12.332 "ccc5a955-9931-403d-9e3b-ebdf5308994b" 00:21:12.332 ], 00:21:12.332 "product_name": "Malloc disk", 00:21:12.332 "block_size": 512, 00:21:12.332 "num_blocks": 65536, 00:21:12.332 "uuid": "ccc5a955-9931-403d-9e3b-ebdf5308994b", 00:21:12.332 "assigned_rate_limits": { 00:21:12.332 "rw_ios_per_sec": 0, 00:21:12.332 
"rw_mbytes_per_sec": 0, 00:21:12.332 "r_mbytes_per_sec": 0, 00:21:12.332 "w_mbytes_per_sec": 0 00:21:12.332 }, 00:21:12.332 "claimed": true, 00:21:12.332 "claim_type": "exclusive_write", 00:21:12.332 "zoned": false, 00:21:12.332 "supported_io_types": { 00:21:12.332 "read": true, 00:21:12.332 "write": true, 00:21:12.332 "unmap": true, 00:21:12.332 "flush": true, 00:21:12.332 "reset": true, 00:21:12.332 "nvme_admin": false, 00:21:12.332 "nvme_io": false, 00:21:12.332 "nvme_io_md": false, 00:21:12.332 "write_zeroes": true, 00:21:12.332 "zcopy": true, 00:21:12.332 "get_zone_info": false, 00:21:12.332 "zone_management": false, 00:21:12.332 "zone_append": false, 00:21:12.332 "compare": false, 00:21:12.332 "compare_and_write": false, 00:21:12.332 "abort": true, 00:21:12.332 "seek_hole": false, 00:21:12.332 "seek_data": false, 00:21:12.332 "copy": true, 00:21:12.332 "nvme_iov_md": false 00:21:12.332 }, 00:21:12.332 "memory_domains": [ 00:21:12.332 { 00:21:12.332 "dma_device_id": "system", 00:21:12.332 "dma_device_type": 1 00:21:12.332 }, 00:21:12.332 { 00:21:12.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.332 "dma_device_type": 2 00:21:12.332 } 00:21:12.332 ], 00:21:12.332 "driver_specific": {} 00:21:12.332 } 00:21:12.332 ] 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:12.332 13:15:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:12.332 "name": "Existed_Raid", 00:21:12.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.332 "strip_size_kb": 64, 00:21:12.332 "state": "configuring", 00:21:12.332 "raid_level": "raid5f", 00:21:12.332 "superblock": false, 00:21:12.332 "num_base_bdevs": 3, 00:21:12.332 "num_base_bdevs_discovered": 1, 00:21:12.332 "num_base_bdevs_operational": 3, 00:21:12.332 "base_bdevs_list": [ 00:21:12.332 { 00:21:12.332 "name": "BaseBdev1", 00:21:12.332 "uuid": "ccc5a955-9931-403d-9e3b-ebdf5308994b", 00:21:12.332 "is_configured": true, 00:21:12.332 "data_offset": 0, 00:21:12.332 "data_size": 65536 00:21:12.332 }, 00:21:12.332 { 00:21:12.332 "name": 
"BaseBdev2", 00:21:12.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.332 "is_configured": false, 00:21:12.332 "data_offset": 0, 00:21:12.332 "data_size": 0 00:21:12.332 }, 00:21:12.332 { 00:21:12.332 "name": "BaseBdev3", 00:21:12.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.332 "is_configured": false, 00:21:12.332 "data_offset": 0, 00:21:12.332 "data_size": 0 00:21:12.332 } 00:21:12.332 ] 00:21:12.332 }' 00:21:12.332 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:12.333 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.931 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:12.931 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.931 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.931 [2024-12-06 13:15:59.823835] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:12.931 [2024-12-06 13:15:59.823917] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:12.932 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.932 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:12.932 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.932 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.932 [2024-12-06 13:15:59.831868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:12.932 [2024-12-06 13:15:59.834510] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:21:12.932 [2024-12-06 13:15:59.834572] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:12.932 [2024-12-06 13:15:59.834590] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:12.932 [2024-12-06 13:15:59.834607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:12.932 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.932 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:12.932 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:12.932 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:12.932 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:12.932 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:12.932 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:12.932 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:12.932 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:12.932 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:12.932 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:12.932 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:12.932 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:12.932 13:15:59 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.932 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.932 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:12.932 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.932 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.932 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:12.932 "name": "Existed_Raid", 00:21:12.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.932 "strip_size_kb": 64, 00:21:12.932 "state": "configuring", 00:21:12.932 "raid_level": "raid5f", 00:21:12.932 "superblock": false, 00:21:12.932 "num_base_bdevs": 3, 00:21:12.932 "num_base_bdevs_discovered": 1, 00:21:12.932 "num_base_bdevs_operational": 3, 00:21:12.932 "base_bdevs_list": [ 00:21:12.932 { 00:21:12.932 "name": "BaseBdev1", 00:21:12.932 "uuid": "ccc5a955-9931-403d-9e3b-ebdf5308994b", 00:21:12.932 "is_configured": true, 00:21:12.932 "data_offset": 0, 00:21:12.932 "data_size": 65536 00:21:12.932 }, 00:21:12.932 { 00:21:12.932 "name": "BaseBdev2", 00:21:12.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.932 "is_configured": false, 00:21:12.932 "data_offset": 0, 00:21:12.932 "data_size": 0 00:21:12.932 }, 00:21:12.932 { 00:21:12.932 "name": "BaseBdev3", 00:21:12.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.932 "is_configured": false, 00:21:12.932 "data_offset": 0, 00:21:12.932 "data_size": 0 00:21:12.932 } 00:21:12.932 ] 00:21:12.932 }' 00:21:12.932 13:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:12.932 13:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.523 13:16:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:13.523 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.523 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.523 [2024-12-06 13:16:00.406290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:13.523 BaseBdev2 00:21:13.523 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.523 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:13.523 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:13.523 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:13.523 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:13.523 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:13.523 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:13.523 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:13.523 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.523 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.523 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.523 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:13.523 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.523 13:16:00 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:13.523 [ 00:21:13.523 { 00:21:13.523 "name": "BaseBdev2", 00:21:13.523 "aliases": [ 00:21:13.523 "64e78263-ca59-4a3e-bb39-bd48a897a436" 00:21:13.523 ], 00:21:13.523 "product_name": "Malloc disk", 00:21:13.523 "block_size": 512, 00:21:13.523 "num_blocks": 65536, 00:21:13.523 "uuid": "64e78263-ca59-4a3e-bb39-bd48a897a436", 00:21:13.523 "assigned_rate_limits": { 00:21:13.523 "rw_ios_per_sec": 0, 00:21:13.523 "rw_mbytes_per_sec": 0, 00:21:13.523 "r_mbytes_per_sec": 0, 00:21:13.523 "w_mbytes_per_sec": 0 00:21:13.523 }, 00:21:13.523 "claimed": true, 00:21:13.523 "claim_type": "exclusive_write", 00:21:13.523 "zoned": false, 00:21:13.523 "supported_io_types": { 00:21:13.523 "read": true, 00:21:13.523 "write": true, 00:21:13.523 "unmap": true, 00:21:13.523 "flush": true, 00:21:13.523 "reset": true, 00:21:13.523 "nvme_admin": false, 00:21:13.523 "nvme_io": false, 00:21:13.523 "nvme_io_md": false, 00:21:13.523 "write_zeroes": true, 00:21:13.523 "zcopy": true, 00:21:13.523 "get_zone_info": false, 00:21:13.523 "zone_management": false, 00:21:13.523 "zone_append": false, 00:21:13.523 "compare": false, 00:21:13.523 "compare_and_write": false, 00:21:13.523 "abort": true, 00:21:13.523 "seek_hole": false, 00:21:13.523 "seek_data": false, 00:21:13.523 "copy": true, 00:21:13.523 "nvme_iov_md": false 00:21:13.523 }, 00:21:13.523 "memory_domains": [ 00:21:13.523 { 00:21:13.523 "dma_device_id": "system", 00:21:13.523 "dma_device_type": 1 00:21:13.523 }, 00:21:13.523 { 00:21:13.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:13.523 "dma_device_type": 2 00:21:13.523 } 00:21:13.523 ], 00:21:13.523 "driver_specific": {} 00:21:13.523 } 00:21:13.523 ] 00:21:13.523 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.523 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:13.523 13:16:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:13.523 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:13.523 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:13.523 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:13.523 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:13.523 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:13.523 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:13.523 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:13.523 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:13.524 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:13.524 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:13.524 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:13.524 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.524 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:13.524 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.524 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.524 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.524 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:21:13.524 "name": "Existed_Raid", 00:21:13.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.524 "strip_size_kb": 64, 00:21:13.524 "state": "configuring", 00:21:13.524 "raid_level": "raid5f", 00:21:13.524 "superblock": false, 00:21:13.524 "num_base_bdevs": 3, 00:21:13.524 "num_base_bdevs_discovered": 2, 00:21:13.524 "num_base_bdevs_operational": 3, 00:21:13.524 "base_bdevs_list": [ 00:21:13.524 { 00:21:13.524 "name": "BaseBdev1", 00:21:13.524 "uuid": "ccc5a955-9931-403d-9e3b-ebdf5308994b", 00:21:13.524 "is_configured": true, 00:21:13.524 "data_offset": 0, 00:21:13.524 "data_size": 65536 00:21:13.524 }, 00:21:13.524 { 00:21:13.524 "name": "BaseBdev2", 00:21:13.524 "uuid": "64e78263-ca59-4a3e-bb39-bd48a897a436", 00:21:13.524 "is_configured": true, 00:21:13.524 "data_offset": 0, 00:21:13.524 "data_size": 65536 00:21:13.524 }, 00:21:13.524 { 00:21:13.524 "name": "BaseBdev3", 00:21:13.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.524 "is_configured": false, 00:21:13.524 "data_offset": 0, 00:21:13.524 "data_size": 0 00:21:13.524 } 00:21:13.524 ] 00:21:13.524 }' 00:21:13.524 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:13.524 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.092 13:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:14.092 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.092 13:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.092 [2024-12-06 13:16:01.025525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:14.092 [2024-12-06 13:16:01.025855] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:14.092 [2024-12-06 13:16:01.025891] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:21:14.092 [2024-12-06 13:16:01.026464] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:14.092 [2024-12-06 13:16:01.031961] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:14.092 [2024-12-06 13:16:01.031991] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:14.092 [2024-12-06 13:16:01.032362] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:14.092 BaseBdev3 00:21:14.092 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.092 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:21:14.092 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:14.092 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:14.092 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:14.092 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:14.092 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:14.092 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:14.092 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.092 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.092 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.092 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:21:14.092 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.092 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.092 [ 00:21:14.092 { 00:21:14.092 "name": "BaseBdev3", 00:21:14.092 "aliases": [ 00:21:14.092 "dd82b698-c927-4cc8-b83d-f28b1189b72d" 00:21:14.092 ], 00:21:14.092 "product_name": "Malloc disk", 00:21:14.092 "block_size": 512, 00:21:14.092 "num_blocks": 65536, 00:21:14.092 "uuid": "dd82b698-c927-4cc8-b83d-f28b1189b72d", 00:21:14.092 "assigned_rate_limits": { 00:21:14.092 "rw_ios_per_sec": 0, 00:21:14.092 "rw_mbytes_per_sec": 0, 00:21:14.092 "r_mbytes_per_sec": 0, 00:21:14.092 "w_mbytes_per_sec": 0 00:21:14.092 }, 00:21:14.092 "claimed": true, 00:21:14.092 "claim_type": "exclusive_write", 00:21:14.092 "zoned": false, 00:21:14.092 "supported_io_types": { 00:21:14.092 "read": true, 00:21:14.092 "write": true, 00:21:14.092 "unmap": true, 00:21:14.092 "flush": true, 00:21:14.092 "reset": true, 00:21:14.092 "nvme_admin": false, 00:21:14.092 "nvme_io": false, 00:21:14.092 "nvme_io_md": false, 00:21:14.092 "write_zeroes": true, 00:21:14.092 "zcopy": true, 00:21:14.092 "get_zone_info": false, 00:21:14.092 "zone_management": false, 00:21:14.092 "zone_append": false, 00:21:14.092 "compare": false, 00:21:14.092 "compare_and_write": false, 00:21:14.092 "abort": true, 00:21:14.092 "seek_hole": false, 00:21:14.092 "seek_data": false, 00:21:14.092 "copy": true, 00:21:14.092 "nvme_iov_md": false 00:21:14.092 }, 00:21:14.092 "memory_domains": [ 00:21:14.092 { 00:21:14.092 "dma_device_id": "system", 00:21:14.092 "dma_device_type": 1 00:21:14.092 }, 00:21:14.092 { 00:21:14.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.092 "dma_device_type": 2 00:21:14.092 } 00:21:14.092 ], 00:21:14.092 "driver_specific": {} 00:21:14.092 } 00:21:14.092 ] 00:21:14.092 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:21:14.092 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:14.092 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:14.092 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:14.092 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:21:14.092 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:14.092 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:14.092 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:14.092 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:14.092 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:14.092 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:14.092 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:14.092 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:14.092 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:14.092 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.092 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:14.092 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.092 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.092 13:16:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.351 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:14.351 "name": "Existed_Raid", 00:21:14.351 "uuid": "ee715f06-f87a-49ac-b035-38819440ddb9", 00:21:14.351 "strip_size_kb": 64, 00:21:14.351 "state": "online", 00:21:14.351 "raid_level": "raid5f", 00:21:14.351 "superblock": false, 00:21:14.351 "num_base_bdevs": 3, 00:21:14.351 "num_base_bdevs_discovered": 3, 00:21:14.351 "num_base_bdevs_operational": 3, 00:21:14.351 "base_bdevs_list": [ 00:21:14.351 { 00:21:14.351 "name": "BaseBdev1", 00:21:14.351 "uuid": "ccc5a955-9931-403d-9e3b-ebdf5308994b", 00:21:14.351 "is_configured": true, 00:21:14.351 "data_offset": 0, 00:21:14.351 "data_size": 65536 00:21:14.351 }, 00:21:14.351 { 00:21:14.351 "name": "BaseBdev2", 00:21:14.351 "uuid": "64e78263-ca59-4a3e-bb39-bd48a897a436", 00:21:14.351 "is_configured": true, 00:21:14.351 "data_offset": 0, 00:21:14.351 "data_size": 65536 00:21:14.351 }, 00:21:14.351 { 00:21:14.351 "name": "BaseBdev3", 00:21:14.351 "uuid": "dd82b698-c927-4cc8-b83d-f28b1189b72d", 00:21:14.351 "is_configured": true, 00:21:14.351 "data_offset": 0, 00:21:14.351 "data_size": 65536 00:21:14.351 } 00:21:14.351 ] 00:21:14.351 }' 00:21:14.351 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:14.351 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.611 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:14.611 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:14.611 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:14.611 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:14.611 13:16:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:14.611 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:14.611 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:14.611 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.611 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.611 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:14.611 [2024-12-06 13:16:01.594846] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:14.611 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.870 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:14.870 "name": "Existed_Raid", 00:21:14.870 "aliases": [ 00:21:14.870 "ee715f06-f87a-49ac-b035-38819440ddb9" 00:21:14.871 ], 00:21:14.871 "product_name": "Raid Volume", 00:21:14.871 "block_size": 512, 00:21:14.871 "num_blocks": 131072, 00:21:14.871 "uuid": "ee715f06-f87a-49ac-b035-38819440ddb9", 00:21:14.871 "assigned_rate_limits": { 00:21:14.871 "rw_ios_per_sec": 0, 00:21:14.871 "rw_mbytes_per_sec": 0, 00:21:14.871 "r_mbytes_per_sec": 0, 00:21:14.871 "w_mbytes_per_sec": 0 00:21:14.871 }, 00:21:14.871 "claimed": false, 00:21:14.871 "zoned": false, 00:21:14.871 "supported_io_types": { 00:21:14.871 "read": true, 00:21:14.871 "write": true, 00:21:14.871 "unmap": false, 00:21:14.871 "flush": false, 00:21:14.871 "reset": true, 00:21:14.871 "nvme_admin": false, 00:21:14.871 "nvme_io": false, 00:21:14.871 "nvme_io_md": false, 00:21:14.871 "write_zeroes": true, 00:21:14.871 "zcopy": false, 00:21:14.871 "get_zone_info": false, 00:21:14.871 "zone_management": false, 00:21:14.871 "zone_append": false, 
00:21:14.871 "compare": false, 00:21:14.871 "compare_and_write": false, 00:21:14.871 "abort": false, 00:21:14.871 "seek_hole": false, 00:21:14.871 "seek_data": false, 00:21:14.871 "copy": false, 00:21:14.871 "nvme_iov_md": false 00:21:14.871 }, 00:21:14.871 "driver_specific": { 00:21:14.871 "raid": { 00:21:14.871 "uuid": "ee715f06-f87a-49ac-b035-38819440ddb9", 00:21:14.871 "strip_size_kb": 64, 00:21:14.871 "state": "online", 00:21:14.871 "raid_level": "raid5f", 00:21:14.871 "superblock": false, 00:21:14.871 "num_base_bdevs": 3, 00:21:14.871 "num_base_bdevs_discovered": 3, 00:21:14.871 "num_base_bdevs_operational": 3, 00:21:14.871 "base_bdevs_list": [ 00:21:14.871 { 00:21:14.871 "name": "BaseBdev1", 00:21:14.871 "uuid": "ccc5a955-9931-403d-9e3b-ebdf5308994b", 00:21:14.871 "is_configured": true, 00:21:14.871 "data_offset": 0, 00:21:14.871 "data_size": 65536 00:21:14.871 }, 00:21:14.871 { 00:21:14.871 "name": "BaseBdev2", 00:21:14.871 "uuid": "64e78263-ca59-4a3e-bb39-bd48a897a436", 00:21:14.871 "is_configured": true, 00:21:14.871 "data_offset": 0, 00:21:14.871 "data_size": 65536 00:21:14.871 }, 00:21:14.871 { 00:21:14.871 "name": "BaseBdev3", 00:21:14.871 "uuid": "dd82b698-c927-4cc8-b83d-f28b1189b72d", 00:21:14.871 "is_configured": true, 00:21:14.871 "data_offset": 0, 00:21:14.871 "data_size": 65536 00:21:14.871 } 00:21:14.871 ] 00:21:14.871 } 00:21:14.871 } 00:21:14.871 }' 00:21:14.871 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:14.871 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:14.871 BaseBdev2 00:21:14.871 BaseBdev3' 00:21:14.871 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:14.871 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
'
00:21:14.871 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:21:14.871 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:21:14.871 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:14.871 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:21:14.871 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:21:14.871 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:14.871 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:21:14.871 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:21:14.871 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:21:14.871 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:21:14.871 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:14.871 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:21:14.871 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:21:14.871 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:14.871 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:21:14.871 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:21:14.871 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:21:14.871 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:21:14.871 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:14.871 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:21:14.871 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:21:14.871 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:15.129 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:21:15.129 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:21:15.129 13:16:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:21:15.129 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:15.129 13:16:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:21:15.129 [2024-12-06 13:16:01.922717] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:21:15.129 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:15.129 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:21:15.129 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f
00:21:15.129 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:21:15.129 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0
00:21:15.129 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:21:15.129 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2
00:21:15.129 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:21:15.129 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:21:15.129 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:21:15.129 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:21:15.129 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:21:15.129 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:21:15.129 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:21:15.129 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:21:15.129 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:21:15.129 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:15.129 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:15.129 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:21:15.129 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:21:15.129 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:15.129 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:21:15.129 "name": "Existed_Raid",
00:21:15.129 "uuid": "ee715f06-f87a-49ac-b035-38819440ddb9",
00:21:15.129 "strip_size_kb": 64,
00:21:15.129 "state": "online",
00:21:15.129 "raid_level": "raid5f",
00:21:15.129 "superblock": false,
00:21:15.129 "num_base_bdevs": 3,
00:21:15.129 "num_base_bdevs_discovered": 2,
00:21:15.129 "num_base_bdevs_operational": 2,
00:21:15.130 "base_bdevs_list": [
00:21:15.130 {
00:21:15.130 "name": null,
00:21:15.130 "uuid": "00000000-0000-0000-0000-000000000000",
00:21:15.130 "is_configured": false,
00:21:15.130 "data_offset": 0,
00:21:15.130 "data_size": 65536
00:21:15.130 },
00:21:15.130 {
00:21:15.130 "name": "BaseBdev2",
00:21:15.130 "uuid": "64e78263-ca59-4a3e-bb39-bd48a897a436",
00:21:15.130 "is_configured": true,
00:21:15.130 "data_offset": 0,
00:21:15.130 "data_size": 65536
00:21:15.130 },
00:21:15.130 {
00:21:15.130 "name": "BaseBdev3",
00:21:15.130 "uuid": "dd82b698-c927-4cc8-b83d-f28b1189b72d",
00:21:15.130 "is_configured": true,
00:21:15.130 "data_offset": 0,
00:21:15.130 "data_size": 65536
00:21:15.130 }
00:21:15.130 ]
00:21:15.130 }'
00:21:15.130 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:21:15.130 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:21:15.695 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:21:15.695 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:21:15.695 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:15.695 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:15.695 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:21:15.695 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:21:15.695 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:15.695 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:21:15.695 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:21:15.695 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:21:15.695 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:15.695 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:21:15.695 [2024-12-06 13:16:02.565297] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:21:15.695 [2024-12-06 13:16:02.565446] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:21:15.695 [2024-12-06 13:16:02.656255] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:21:15.695 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:15.695 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:21:15.695 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:21:15.695 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:15.695 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:15.695 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:21:15.695 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:21:15.695 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:15.695 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:21:15.695 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:21:15.695 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:21:15.695 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:15.695 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:21:15.953 [2024-12-06 13:16:02.712370] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:21:15.953 [2024-12-06 13:16:02.712454] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:21:15.953 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:15.953 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:21:15.953 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:21:15.953 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:15.953 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:21:15.953 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:15.953 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:21:15.953 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:15.953 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:21:15.953 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:21:15.953 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:21:15.953 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:21:15.953 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:21:15.953 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:21:15.953 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:15.953 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:21:15.953 BaseBdev2
13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:15.953 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:21:15.953 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:21:15.953 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:21:15.953 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:21:15.953 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:21:15.953 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:21:15.953 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:21:15.953 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:15.953 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:21:15.953 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:15.953 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:21:15.953 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:15.953 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:21:15.953 [
00:21:15.953 {
00:21:15.953 "name": "BaseBdev2",
00:21:15.953 "aliases": [
00:21:15.953 "484c42d1-12d4-41c0-800e-76d90f1496f9"
00:21:15.953 ],
00:21:15.953 "product_name": "Malloc disk",
00:21:15.953 "block_size": 512,
00:21:15.953 "num_blocks": 65536,
00:21:15.953 "uuid": "484c42d1-12d4-41c0-800e-76d90f1496f9",
00:21:15.953 "assigned_rate_limits": {
00:21:15.953 "rw_ios_per_sec": 0,
00:21:15.953 "rw_mbytes_per_sec": 0,
00:21:15.953 "r_mbytes_per_sec": 0,
00:21:15.953 "w_mbytes_per_sec": 0
00:21:15.953 },
00:21:15.953 "claimed": false,
00:21:15.953 "zoned": false,
00:21:15.953 "supported_io_types": {
00:21:15.953 "read": true,
00:21:15.953 "write": true,
00:21:15.953 "unmap": true,
00:21:15.953 "flush": true,
00:21:15.953 "reset": true,
00:21:15.953 "nvme_admin": false,
00:21:15.953 "nvme_io": false,
00:21:15.953 "nvme_io_md": false,
00:21:15.953 "write_zeroes": true,
00:21:15.953 "zcopy": true,
00:21:15.953 "get_zone_info": false,
00:21:15.953 "zone_management": false,
00:21:15.953 "zone_append": false,
00:21:15.953 "compare": false,
00:21:15.953 "compare_and_write": false,
00:21:15.953 "abort": true,
00:21:15.953 "seek_hole": false,
00:21:15.953 "seek_data": false,
00:21:15.953 "copy": true,
00:21:15.953 "nvme_iov_md": false
00:21:15.953 },
00:21:15.953 "memory_domains": [
00:21:15.953 {
00:21:15.953 "dma_device_id": "system",
00:21:15.953 "dma_device_type": 1
00:21:15.953 },
00:21:15.953 {
00:21:15.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:15.953 "dma_device_type": 2
00:21:15.953 }
00:21:15.953 ],
00:21:15.953 "driver_specific": {}
00:21:15.953 }
00:21:15.953 ]
00:21:15.953 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:15.953 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:21:15.953 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:21:15.953 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:21:15.953 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:21:15.953 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:15.953 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:21:16.212 BaseBdev3
13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:16.212 13:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:21:16.212 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:21:16.212 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:21:16.212 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:21:16.212 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:21:16.212 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:21:16.212 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:21:16.212 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:16.212 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:21:16.212 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:16.212 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:21:16.212 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:16.212 13:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:21:16.212 [
00:21:16.212 {
00:21:16.212 "name": "BaseBdev3",
00:21:16.212 "aliases": [
00:21:16.212 "f12c1be4-7c93-4cac-8a0f-1358e20f4b40"
00:21:16.212 ],
00:21:16.212 "product_name": "Malloc disk",
00:21:16.212 "block_size": 512,
00:21:16.212 "num_blocks": 65536,
00:21:16.212 "uuid": "f12c1be4-7c93-4cac-8a0f-1358e20f4b40",
00:21:16.212 "assigned_rate_limits": {
00:21:16.212 "rw_ios_per_sec": 0,
00:21:16.212 "rw_mbytes_per_sec": 0,
00:21:16.212 "r_mbytes_per_sec": 0,
00:21:16.212 "w_mbytes_per_sec": 0
00:21:16.212 },
00:21:16.212 "claimed": false,
00:21:16.212 "zoned": false,
00:21:16.212 "supported_io_types": {
00:21:16.212 "read": true,
00:21:16.212 "write": true,
00:21:16.212 "unmap": true,
00:21:16.212 "flush": true,
00:21:16.212 "reset": true,
00:21:16.212 "nvme_admin": false,
00:21:16.212 "nvme_io": false,
00:21:16.212 "nvme_io_md": false,
00:21:16.212 "write_zeroes": true,
00:21:16.212 "zcopy": true,
00:21:16.212 "get_zone_info": false,
00:21:16.212 "zone_management": false,
00:21:16.212 "zone_append": false,
00:21:16.212 "compare": false,
00:21:16.212 "compare_and_write": false,
00:21:16.212 "abort": true,
00:21:16.212 "seek_hole": false,
00:21:16.212 "seek_data": false,
00:21:16.212 "copy": true,
00:21:16.212 "nvme_iov_md": false
00:21:16.212 },
00:21:16.212 "memory_domains": [
00:21:16.212 {
00:21:16.212 "dma_device_id": "system",
00:21:16.212 "dma_device_type": 1
00:21:16.212 },
00:21:16.213 {
00:21:16.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:16.213 "dma_device_type": 2
00:21:16.213 }
00:21:16.213 ],
00:21:16.213 "driver_specific": {}
00:21:16.213 }
00:21:16.213 ]
00:21:16.213 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:16.213 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:21:16.213 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:21:16.213 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:21:16.213 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:21:16.213 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:16.213 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:21:16.213 [2024-12-06 13:16:03.006134] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:21:16.213 [2024-12-06 13:16:03.006199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:21:16.213 [2024-12-06 13:16:03.006234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:21:16.213 [2024-12-06 13:16:03.008854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:21:16.213 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:16.213 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:21:16.213 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:21:16.213 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:21:16.213 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:21:16.213 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:21:16.213 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:21:16.213 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:21:16.213 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:21:16.213 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:21:16.213 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:21:16.213 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:21:16.213 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:16.213 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:16.213 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:21:16.213 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:16.213 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:21:16.213 "name": "Existed_Raid",
00:21:16.213 "uuid": "00000000-0000-0000-0000-000000000000",
00:21:16.213 "strip_size_kb": 64,
00:21:16.213 "state": "configuring",
00:21:16.213 "raid_level": "raid5f",
00:21:16.213 "superblock": false,
00:21:16.213 "num_base_bdevs": 3,
00:21:16.213 "num_base_bdevs_discovered": 2,
00:21:16.213 "num_base_bdevs_operational": 3,
00:21:16.213 "base_bdevs_list": [
00:21:16.213 {
00:21:16.213 "name": "BaseBdev1",
00:21:16.213 "uuid": "00000000-0000-0000-0000-000000000000",
00:21:16.213 "is_configured": false,
00:21:16.213 "data_offset": 0,
00:21:16.213 "data_size": 0
00:21:16.213 },
00:21:16.213 {
00:21:16.213 "name": "BaseBdev2",
00:21:16.213 "uuid": "484c42d1-12d4-41c0-800e-76d90f1496f9",
00:21:16.213 "is_configured": true,
00:21:16.213 "data_offset": 0,
00:21:16.213 "data_size": 65536
00:21:16.213 },
00:21:16.213 {
00:21:16.213 "name": "BaseBdev3",
00:21:16.213 "uuid": "f12c1be4-7c93-4cac-8a0f-1358e20f4b40",
00:21:16.213 "is_configured": true,
00:21:16.213 "data_offset": 0,
00:21:16.213 "data_size": 65536
00:21:16.213 }
00:21:16.213 ]
00:21:16.213 }'
00:21:16.213 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:21:16.213 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:21:16.788 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:21:16.788 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:16.788 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:21:16.788 [2024-12-06 13:16:03.530321] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:21:16.788 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:16.788 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:21:16.788 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:21:16.788 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:21:16.788 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:21:16.788 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:21:16.788 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:21:16.788 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:21:16.788 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:21:16.788 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:21:16.788 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:21:16.788 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:16.788 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:16.788 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:21:16.788 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:21:16.788 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:16.788 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:21:16.788 "name": "Existed_Raid",
00:21:16.788 "uuid": "00000000-0000-0000-0000-000000000000",
00:21:16.788 "strip_size_kb": 64,
00:21:16.788 "state": "configuring",
00:21:16.788 "raid_level": "raid5f",
00:21:16.788 "superblock": false,
00:21:16.788 "num_base_bdevs": 3,
00:21:16.788 "num_base_bdevs_discovered": 1,
00:21:16.788 "num_base_bdevs_operational": 3,
00:21:16.788 "base_bdevs_list": [
00:21:16.788 {
00:21:16.788 "name": "BaseBdev1",
00:21:16.788 "uuid": "00000000-0000-0000-0000-000000000000",
00:21:16.788 "is_configured": false,
00:21:16.788 "data_offset": 0,
00:21:16.788 "data_size": 0
00:21:16.788 },
00:21:16.788 {
00:21:16.788 "name": null,
00:21:16.788 "uuid": "484c42d1-12d4-41c0-800e-76d90f1496f9",
00:21:16.788 "is_configured": false,
00:21:16.788 "data_offset": 0,
00:21:16.788 "data_size": 65536
00:21:16.788 },
00:21:16.788 {
00:21:16.788 "name": "BaseBdev3",
00:21:16.788 "uuid": "f12c1be4-7c93-4cac-8a0f-1358e20f4b40",
00:21:16.788 "is_configured": true,
00:21:16.788 "data_offset": 0,
00:21:16.788 "data_size": 65536
00:21:16.788 }
00:21:16.788 ]
00:21:16.788 }'
00:21:16.788 13:16:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:21:16.788 13:16:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:21:17.046 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:21:17.046 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:17.046 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:17.046 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:21:17.305 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:17.305 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:21:17.305 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:21:17.305 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:17.305 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:21:17.305 [2024-12-06 13:16:04.131807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:21:17.305 BaseBdev1
13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:17.305 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:21:17.305 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:21:17.305 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:21:17.305 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:21:17.305 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:21:17.305 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:21:17.305 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:21:17.305 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:17.305 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:21:17.305 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:17.305 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:21:17.305 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:17.305 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:21:17.305 [
00:21:17.305 {
00:21:17.305 "name": "BaseBdev1",
00:21:17.305 "aliases": [
00:21:17.305 "dc6799fd-72d8-4d2e-b1dd-3b31d7e20dc3"
00:21:17.305 ],
00:21:17.305 "product_name": "Malloc disk",
00:21:17.305 "block_size": 512,
00:21:17.305 "num_blocks": 65536,
00:21:17.305 "uuid": "dc6799fd-72d8-4d2e-b1dd-3b31d7e20dc3",
00:21:17.305 "assigned_rate_limits": {
00:21:17.305 "rw_ios_per_sec": 0,
00:21:17.305 "rw_mbytes_per_sec": 0,
00:21:17.305 "r_mbytes_per_sec": 0,
00:21:17.305 "w_mbytes_per_sec": 0
00:21:17.305 },
00:21:17.305 "claimed": true,
00:21:17.305 "claim_type": "exclusive_write",
00:21:17.305 "zoned": false,
00:21:17.305 "supported_io_types": {
00:21:17.305 "read": true,
00:21:17.305 "write": true,
00:21:17.305 "unmap": true,
00:21:17.305 "flush": true,
00:21:17.305 "reset": true,
00:21:17.305 "nvme_admin": false,
00:21:17.305 "nvme_io": false,
00:21:17.305 "nvme_io_md": false,
00:21:17.305 "write_zeroes": true,
00:21:17.305 "zcopy": true,
00:21:17.305 "get_zone_info": false,
00:21:17.305 "zone_management": false,
00:21:17.305 "zone_append": false,
00:21:17.305 "compare": false,
00:21:17.305 "compare_and_write": false,
00:21:17.305 "abort": true,
00:21:17.305 "seek_hole": false,
00:21:17.305 "seek_data": false,
00:21:17.305 "copy": true,
00:21:17.305 "nvme_iov_md": false
00:21:17.305 },
00:21:17.305 "memory_domains": [
00:21:17.305 {
00:21:17.305 "dma_device_id": "system",
00:21:17.305 "dma_device_type": 1
00:21:17.305 },
00:21:17.305 {
00:21:17.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:21:17.305 "dma_device_type": 2
00:21:17.305 }
00:21:17.305 ],
00:21:17.305 "driver_specific": {}
00:21:17.305 }
00:21:17.305 ]
00:21:17.305 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:17.305 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:21:17.305 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:21:17.305 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:21:17.305 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:21:17.305 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:21:17.305 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:21:17.305 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:21:17.305 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:21:17.305 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:21:17.305 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:21:17.305 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:21:17.305 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:17.305 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:17.305 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:21:17.305 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:21:17.305 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:17.305 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:21:17.305 "name": "Existed_Raid",
00:21:17.305 "uuid": "00000000-0000-0000-0000-000000000000",
00:21:17.305 "strip_size_kb": 64,
00:21:17.305 "state": "configuring",
00:21:17.305 "raid_level": "raid5f",
00:21:17.305 "superblock": false,
00:21:17.305 "num_base_bdevs": 3,
00:21:17.305 "num_base_bdevs_discovered": 2,
00:21:17.305 "num_base_bdevs_operational": 3,
00:21:17.305 "base_bdevs_list": [
00:21:17.306 {
00:21:17.306 "name": "BaseBdev1",
00:21:17.306 "uuid": "dc6799fd-72d8-4d2e-b1dd-3b31d7e20dc3",
00:21:17.306 "is_configured": true,
00:21:17.306 "data_offset": 0,
00:21:17.306 "data_size": 65536
00:21:17.306 },
00:21:17.306 {
00:21:17.306 "name": null,
00:21:17.306 "uuid": "484c42d1-12d4-41c0-800e-76d90f1496f9",
00:21:17.306 "is_configured": false,
00:21:17.306 "data_offset": 0,
00:21:17.306 "data_size": 65536
00:21:17.306 },
00:21:17.306 {
00:21:17.306 "name": "BaseBdev3",
00:21:17.306 "uuid": "f12c1be4-7c93-4cac-8a0f-1358e20f4b40",
00:21:17.306 "is_configured": true,
00:21:17.306 "data_offset": 0,
00:21:17.306 "data_size": 65536
00:21:17.306 }
00:21:17.306 ]
00:21:17.306 }'
00:21:17.306 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:21:17.306 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:21:17.872 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:21:17.872 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:17.872 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:17.872 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:21:17.872 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:17.872 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:21:17.872 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:21:17.872 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:17.872 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:21:17.872 [2024-12-06 13:16:04.712076] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:21:17.872 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:17.872 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:21:17.872 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:21:17.872 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:21:17.872 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:21:17.872 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:21:17.872 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:21:17.872 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:21:17.872 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:21:17.872 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:21:17.872 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:21:17.872 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:17.872 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:17.872 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:21:17.872 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:21:17.872 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:17.872 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:21:17.872 "name": "Existed_Raid",
00:21:17.872 "uuid": "00000000-0000-0000-0000-000000000000",
00:21:17.872 "strip_size_kb": 64,
00:21:17.872 "state": "configuring",
00:21:17.872 "raid_level": "raid5f",
00:21:17.872 "superblock": false,
00:21:17.872 "num_base_bdevs": 3,
00:21:17.872 "num_base_bdevs_discovered": 1,
00:21:17.872 "num_base_bdevs_operational": 3,
00:21:17.872 "base_bdevs_list": [
00:21:17.872 {
00:21:17.872 "name": "BaseBdev1",
00:21:17.872 "uuid": "dc6799fd-72d8-4d2e-b1dd-3b31d7e20dc3",
00:21:17.872 "is_configured": true,
00:21:17.872 "data_offset": 0,
00:21:17.872 "data_size": 65536
00:21:17.872 },
00:21:17.872 {
00:21:17.872 "name": null,
00:21:17.872 "uuid": "484c42d1-12d4-41c0-800e-76d90f1496f9",
00:21:17.872 "is_configured": false,
00:21:17.872 "data_offset": 0,
00:21:17.872 "data_size": 65536
00:21:17.872 },
00:21:17.872 {
00:21:17.872 "name": null,
00:21:17.872 "uuid": "f12c1be4-7c93-4cac-8a0f-1358e20f4b40",
00:21:17.872 "is_configured": false,
00:21:17.872 "data_offset": 0,
00:21:17.872 "data_size": 65536
00:21:17.872 }
00:21:17.872 ]
00:21:17.872 }'
00:21:17.872 13:16:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:21:17.872 13:16:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:21:18.450 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:18.450 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:21:18.450 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:18.450 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:21:18.450 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:18.450 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:21:18.450 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:21:18.450 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:18.450 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:21:18.450 [2024-12-06 13:16:05.264244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:21:18.450 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:18.450 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:21:18.450 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:21:18.450 13:16:05
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:18.450 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:18.450 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:18.450 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:18.450 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:18.450 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:18.450 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:18.450 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:18.450 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:18.450 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.450 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.450 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.450 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.450 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:18.450 "name": "Existed_Raid", 00:21:18.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.450 "strip_size_kb": 64, 00:21:18.450 "state": "configuring", 00:21:18.450 "raid_level": "raid5f", 00:21:18.450 "superblock": false, 00:21:18.450 "num_base_bdevs": 3, 00:21:18.450 "num_base_bdevs_discovered": 2, 00:21:18.450 "num_base_bdevs_operational": 3, 00:21:18.450 "base_bdevs_list": [ 00:21:18.450 { 
00:21:18.450 "name": "BaseBdev1", 00:21:18.450 "uuid": "dc6799fd-72d8-4d2e-b1dd-3b31d7e20dc3", 00:21:18.450 "is_configured": true, 00:21:18.450 "data_offset": 0, 00:21:18.450 "data_size": 65536 00:21:18.450 }, 00:21:18.450 { 00:21:18.450 "name": null, 00:21:18.450 "uuid": "484c42d1-12d4-41c0-800e-76d90f1496f9", 00:21:18.450 "is_configured": false, 00:21:18.450 "data_offset": 0, 00:21:18.450 "data_size": 65536 00:21:18.450 }, 00:21:18.450 { 00:21:18.450 "name": "BaseBdev3", 00:21:18.450 "uuid": "f12c1be4-7c93-4cac-8a0f-1358e20f4b40", 00:21:18.450 "is_configured": true, 00:21:18.450 "data_offset": 0, 00:21:18.450 "data_size": 65536 00:21:18.450 } 00:21:18.450 ] 00:21:18.450 }' 00:21:18.450 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:18.450 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.708 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:18.708 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.708 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.708 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.966 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.966 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:21:18.966 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:18.966 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.966 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.966 [2024-12-06 13:16:05.756357] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:18.966 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.966 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:18.966 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:18.966 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:18.966 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:18.966 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:18.966 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:18.966 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:18.966 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:18.966 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:18.966 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:18.966 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.966 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:18.966 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.966 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.966 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.966 13:16:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:18.966 "name": "Existed_Raid", 00:21:18.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.966 "strip_size_kb": 64, 00:21:18.966 "state": "configuring", 00:21:18.966 "raid_level": "raid5f", 00:21:18.966 "superblock": false, 00:21:18.966 "num_base_bdevs": 3, 00:21:18.966 "num_base_bdevs_discovered": 1, 00:21:18.966 "num_base_bdevs_operational": 3, 00:21:18.966 "base_bdevs_list": [ 00:21:18.966 { 00:21:18.966 "name": null, 00:21:18.966 "uuid": "dc6799fd-72d8-4d2e-b1dd-3b31d7e20dc3", 00:21:18.966 "is_configured": false, 00:21:18.966 "data_offset": 0, 00:21:18.966 "data_size": 65536 00:21:18.966 }, 00:21:18.966 { 00:21:18.966 "name": null, 00:21:18.966 "uuid": "484c42d1-12d4-41c0-800e-76d90f1496f9", 00:21:18.966 "is_configured": false, 00:21:18.966 "data_offset": 0, 00:21:18.966 "data_size": 65536 00:21:18.966 }, 00:21:18.966 { 00:21:18.966 "name": "BaseBdev3", 00:21:18.966 "uuid": "f12c1be4-7c93-4cac-8a0f-1358e20f4b40", 00:21:18.966 "is_configured": true, 00:21:18.966 "data_offset": 0, 00:21:18.966 "data_size": 65536 00:21:18.966 } 00:21:18.966 ] 00:21:18.966 }' 00:21:18.966 13:16:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:18.966 13:16:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.533 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.533 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:19.533 13:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.533 13:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.533 13:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.533 13:16:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:21:19.533 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:19.533 13:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.533 13:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.533 [2024-12-06 13:16:06.413729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:19.533 13:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.533 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:19.533 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:19.533 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:19.533 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:19.533 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:19.533 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:19.533 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:19.533 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:19.533 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:19.533 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:19.533 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:19.533 13:16:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.533 13:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.533 13:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.533 13:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.533 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:19.533 "name": "Existed_Raid", 00:21:19.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.533 "strip_size_kb": 64, 00:21:19.533 "state": "configuring", 00:21:19.533 "raid_level": "raid5f", 00:21:19.533 "superblock": false, 00:21:19.533 "num_base_bdevs": 3, 00:21:19.533 "num_base_bdevs_discovered": 2, 00:21:19.533 "num_base_bdevs_operational": 3, 00:21:19.533 "base_bdevs_list": [ 00:21:19.533 { 00:21:19.533 "name": null, 00:21:19.533 "uuid": "dc6799fd-72d8-4d2e-b1dd-3b31d7e20dc3", 00:21:19.533 "is_configured": false, 00:21:19.533 "data_offset": 0, 00:21:19.533 "data_size": 65536 00:21:19.533 }, 00:21:19.533 { 00:21:19.533 "name": "BaseBdev2", 00:21:19.533 "uuid": "484c42d1-12d4-41c0-800e-76d90f1496f9", 00:21:19.533 "is_configured": true, 00:21:19.533 "data_offset": 0, 00:21:19.533 "data_size": 65536 00:21:19.533 }, 00:21:19.533 { 00:21:19.533 "name": "BaseBdev3", 00:21:19.533 "uuid": "f12c1be4-7c93-4cac-8a0f-1358e20f4b40", 00:21:19.533 "is_configured": true, 00:21:19.533 "data_offset": 0, 00:21:19.533 "data_size": 65536 00:21:19.533 } 00:21:19.533 ] 00:21:19.533 }' 00:21:19.533 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:19.533 13:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.100 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:20.100 13:16:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.100 13:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.100 13:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.100 13:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.100 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:21:20.100 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:20.100 13:16:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.100 13:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.100 13:16:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.100 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.100 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u dc6799fd-72d8-4d2e-b1dd-3b31d7e20dc3 00:21:20.100 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.100 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.100 [2024-12-06 13:16:07.079238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:20.100 [2024-12-06 13:16:07.079313] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:20.100 [2024-12-06 13:16:07.079332] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:21:20.100 [2024-12-06 13:16:07.079704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:21:20.100 [2024-12-06 13:16:07.084682] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:20.100 [2024-12-06 13:16:07.084715] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:21:20.100 [2024-12-06 13:16:07.085077] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:20.100 NewBaseBdev 00:21:20.100 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.100 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:21:20.100 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:21:20.100 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:20.100 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:20.100 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:20.100 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:20.100 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:20.100 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.100 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.100 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.100 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:20.100 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.100 13:16:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.100 [ 00:21:20.100 { 00:21:20.100 "name": "NewBaseBdev", 00:21:20.100 "aliases": [ 00:21:20.100 "dc6799fd-72d8-4d2e-b1dd-3b31d7e20dc3" 00:21:20.100 ], 00:21:20.100 "product_name": "Malloc disk", 00:21:20.100 "block_size": 512, 00:21:20.100 "num_blocks": 65536, 00:21:20.100 "uuid": "dc6799fd-72d8-4d2e-b1dd-3b31d7e20dc3", 00:21:20.100 "assigned_rate_limits": { 00:21:20.100 "rw_ios_per_sec": 0, 00:21:20.100 "rw_mbytes_per_sec": 0, 00:21:20.100 "r_mbytes_per_sec": 0, 00:21:20.100 "w_mbytes_per_sec": 0 00:21:20.100 }, 00:21:20.100 "claimed": true, 00:21:20.100 "claim_type": "exclusive_write", 00:21:20.100 "zoned": false, 00:21:20.100 "supported_io_types": { 00:21:20.100 "read": true, 00:21:20.100 "write": true, 00:21:20.100 "unmap": true, 00:21:20.100 "flush": true, 00:21:20.100 "reset": true, 00:21:20.100 "nvme_admin": false, 00:21:20.100 "nvme_io": false, 00:21:20.100 "nvme_io_md": false, 00:21:20.100 "write_zeroes": true, 00:21:20.100 "zcopy": true, 00:21:20.100 "get_zone_info": false, 00:21:20.100 "zone_management": false, 00:21:20.100 "zone_append": false, 00:21:20.100 "compare": false, 00:21:20.100 "compare_and_write": false, 00:21:20.100 "abort": true, 00:21:20.100 "seek_hole": false, 00:21:20.100 "seek_data": false, 00:21:20.100 "copy": true, 00:21:20.100 "nvme_iov_md": false 00:21:20.100 }, 00:21:20.100 "memory_domains": [ 00:21:20.100 { 00:21:20.101 "dma_device_id": "system", 00:21:20.101 "dma_device_type": 1 00:21:20.101 }, 00:21:20.101 { 00:21:20.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:20.101 "dma_device_type": 2 00:21:20.101 } 00:21:20.101 ], 00:21:20.101 "driver_specific": {} 00:21:20.101 } 00:21:20.101 ] 00:21:20.101 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.101 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:20.101 13:16:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:21:20.101 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:20.101 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:20.101 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:20.101 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:20.101 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:20.101 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:20.101 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:20.101 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:20.101 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:20.378 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.378 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.378 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.378 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:20.378 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.378 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:20.378 "name": "Existed_Raid", 00:21:20.378 "uuid": "e69b0758-55df-4dc9-b56f-19ce35769a49", 00:21:20.378 "strip_size_kb": 64, 00:21:20.378 "state": "online", 
00:21:20.378 "raid_level": "raid5f", 00:21:20.378 "superblock": false, 00:21:20.378 "num_base_bdevs": 3, 00:21:20.378 "num_base_bdevs_discovered": 3, 00:21:20.378 "num_base_bdevs_operational": 3, 00:21:20.378 "base_bdevs_list": [ 00:21:20.378 { 00:21:20.378 "name": "NewBaseBdev", 00:21:20.378 "uuid": "dc6799fd-72d8-4d2e-b1dd-3b31d7e20dc3", 00:21:20.378 "is_configured": true, 00:21:20.378 "data_offset": 0, 00:21:20.378 "data_size": 65536 00:21:20.378 }, 00:21:20.378 { 00:21:20.378 "name": "BaseBdev2", 00:21:20.378 "uuid": "484c42d1-12d4-41c0-800e-76d90f1496f9", 00:21:20.378 "is_configured": true, 00:21:20.378 "data_offset": 0, 00:21:20.378 "data_size": 65536 00:21:20.378 }, 00:21:20.378 { 00:21:20.378 "name": "BaseBdev3", 00:21:20.378 "uuid": "f12c1be4-7c93-4cac-8a0f-1358e20f4b40", 00:21:20.378 "is_configured": true, 00:21:20.378 "data_offset": 0, 00:21:20.378 "data_size": 65536 00:21:20.378 } 00:21:20.378 ] 00:21:20.378 }' 00:21:20.378 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:20.378 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.635 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:21:20.635 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:20.636 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:20.636 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:20.636 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:20.636 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:20.636 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:20.636 13:16:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.636 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.636 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:20.636 [2024-12-06 13:16:07.623545] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:20.636 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.894 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:20.894 "name": "Existed_Raid", 00:21:20.894 "aliases": [ 00:21:20.894 "e69b0758-55df-4dc9-b56f-19ce35769a49" 00:21:20.894 ], 00:21:20.894 "product_name": "Raid Volume", 00:21:20.894 "block_size": 512, 00:21:20.894 "num_blocks": 131072, 00:21:20.894 "uuid": "e69b0758-55df-4dc9-b56f-19ce35769a49", 00:21:20.894 "assigned_rate_limits": { 00:21:20.894 "rw_ios_per_sec": 0, 00:21:20.894 "rw_mbytes_per_sec": 0, 00:21:20.894 "r_mbytes_per_sec": 0, 00:21:20.894 "w_mbytes_per_sec": 0 00:21:20.894 }, 00:21:20.894 "claimed": false, 00:21:20.894 "zoned": false, 00:21:20.894 "supported_io_types": { 00:21:20.894 "read": true, 00:21:20.894 "write": true, 00:21:20.894 "unmap": false, 00:21:20.894 "flush": false, 00:21:20.894 "reset": true, 00:21:20.894 "nvme_admin": false, 00:21:20.894 "nvme_io": false, 00:21:20.894 "nvme_io_md": false, 00:21:20.894 "write_zeroes": true, 00:21:20.894 "zcopy": false, 00:21:20.894 "get_zone_info": false, 00:21:20.894 "zone_management": false, 00:21:20.894 "zone_append": false, 00:21:20.894 "compare": false, 00:21:20.894 "compare_and_write": false, 00:21:20.894 "abort": false, 00:21:20.894 "seek_hole": false, 00:21:20.894 "seek_data": false, 00:21:20.894 "copy": false, 00:21:20.894 "nvme_iov_md": false 00:21:20.894 }, 00:21:20.894 "driver_specific": { 00:21:20.894 "raid": { 00:21:20.894 "uuid": 
"e69b0758-55df-4dc9-b56f-19ce35769a49", 00:21:20.894 "strip_size_kb": 64, 00:21:20.894 "state": "online", 00:21:20.894 "raid_level": "raid5f", 00:21:20.894 "superblock": false, 00:21:20.894 "num_base_bdevs": 3, 00:21:20.894 "num_base_bdevs_discovered": 3, 00:21:20.894 "num_base_bdevs_operational": 3, 00:21:20.894 "base_bdevs_list": [ 00:21:20.894 { 00:21:20.894 "name": "NewBaseBdev", 00:21:20.894 "uuid": "dc6799fd-72d8-4d2e-b1dd-3b31d7e20dc3", 00:21:20.894 "is_configured": true, 00:21:20.894 "data_offset": 0, 00:21:20.894 "data_size": 65536 00:21:20.894 }, 00:21:20.894 { 00:21:20.894 "name": "BaseBdev2", 00:21:20.894 "uuid": "484c42d1-12d4-41c0-800e-76d90f1496f9", 00:21:20.894 "is_configured": true, 00:21:20.894 "data_offset": 0, 00:21:20.894 "data_size": 65536 00:21:20.894 }, 00:21:20.894 { 00:21:20.894 "name": "BaseBdev3", 00:21:20.894 "uuid": "f12c1be4-7c93-4cac-8a0f-1358e20f4b40", 00:21:20.894 "is_configured": true, 00:21:20.894 "data_offset": 0, 00:21:20.894 "data_size": 65536 00:21:20.894 } 00:21:20.894 ] 00:21:20.894 } 00:21:20.894 } 00:21:20.894 }' 00:21:20.894 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:20.894 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:21:20.894 BaseBdev2 00:21:20.894 BaseBdev3' 00:21:20.894 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:20.894 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:20.894 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:20.894 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:21:20.894 13:16:07 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.894 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:20.894 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.894 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.894 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:20.894 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:20.894 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:20.894 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:20.894 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.894 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.894 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:20.894 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.894 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:20.894 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:20.894 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:20.894 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:20.894 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:21:20.894 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.894 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.894 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.153 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:21.153 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:21.153 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:21.153 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.153 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.153 [2024-12-06 13:16:07.927292] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:21.153 [2024-12-06 13:16:07.927334] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:21.153 [2024-12-06 13:16:07.927454] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:21.153 [2024-12-06 13:16:07.927884] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:21.154 [2024-12-06 13:16:07.927921] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:21:21.154 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.154 13:16:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80577 00:21:21.154 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80577 ']' 00:21:21.154 13:16:07 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 80577 00:21:21.154 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:21:21.154 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:21.154 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80577 00:21:21.154 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:21.154 killing process with pid 80577 00:21:21.154 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:21.154 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80577' 00:21:21.154 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80577 00:21:21.154 [2024-12-06 13:16:07.959978] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:21.154 13:16:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80577 00:21:21.412 [2024-12-06 13:16:08.241235] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:22.787 13:16:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:21:22.787 00:21:22.787 real 0m11.893s 00:21:22.787 user 0m19.530s 00:21:22.787 sys 0m1.702s 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.788 ************************************ 00:21:22.788 END TEST raid5f_state_function_test 00:21:22.788 ************************************ 00:21:22.788 13:16:09 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:21:22.788 13:16:09 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:22.788 13:16:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:22.788 13:16:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:22.788 ************************************ 00:21:22.788 START TEST raid5f_state_function_test_sb 00:21:22.788 ************************************ 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:21:22.788 13:16:09 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81210 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:22.788 Process raid pid: 81210 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81210' 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 81210 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81210 ']' 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:22.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:22.788 13:16:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:22.788 [2024-12-06 13:16:09.598256] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:21:22.788 [2024-12-06 13:16:09.598418] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:22.788 [2024-12-06 13:16:09.776836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.046 [2024-12-06 13:16:09.928610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.304 [2024-12-06 13:16:10.164153] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:23.304 [2024-12-06 13:16:10.164240] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:23.870 13:16:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:23.870 13:16:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:21:23.870 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:23.870 13:16:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.870 13:16:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.870 [2024-12-06 13:16:10.633975] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:23.870 [2024-12-06 13:16:10.634076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:23.870 [2024-12-06 13:16:10.634096] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:23.870 [2024-12-06 13:16:10.634113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:23.870 [2024-12-06 13:16:10.634124] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:21:23.870 [2024-12-06 13:16:10.634139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:23.870 13:16:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.870 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:23.870 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:23.870 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:23.870 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:23.870 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:23.870 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:23.870 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:23.870 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:23.870 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:23.870 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:23.870 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.870 13:16:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.870 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:23.870 13:16:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.870 13:16:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.870 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:23.870 "name": "Existed_Raid", 00:21:23.870 "uuid": "832d619e-76fa-4674-9db2-625e2ee0108f", 00:21:23.870 "strip_size_kb": 64, 00:21:23.870 "state": "configuring", 00:21:23.870 "raid_level": "raid5f", 00:21:23.870 "superblock": true, 00:21:23.870 "num_base_bdevs": 3, 00:21:23.870 "num_base_bdevs_discovered": 0, 00:21:23.870 "num_base_bdevs_operational": 3, 00:21:23.870 "base_bdevs_list": [ 00:21:23.870 { 00:21:23.870 "name": "BaseBdev1", 00:21:23.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.870 "is_configured": false, 00:21:23.870 "data_offset": 0, 00:21:23.870 "data_size": 0 00:21:23.870 }, 00:21:23.870 { 00:21:23.870 "name": "BaseBdev2", 00:21:23.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.870 "is_configured": false, 00:21:23.870 "data_offset": 0, 00:21:23.870 "data_size": 0 00:21:23.870 }, 00:21:23.870 { 00:21:23.870 "name": "BaseBdev3", 00:21:23.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.870 "is_configured": false, 00:21:23.870 "data_offset": 0, 00:21:23.870 "data_size": 0 00:21:23.870 } 00:21:23.870 ] 00:21:23.870 }' 00:21:23.870 13:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:23.870 13:16:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.439 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:24.439 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.439 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.439 [2024-12-06 13:16:11.173982] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:24.439 
[2024-12-06 13:16:11.174065] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:24.439 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.439 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:24.439 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.439 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.439 [2024-12-06 13:16:11.181972] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:24.439 [2024-12-06 13:16:11.182040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:24.439 [2024-12-06 13:16:11.182061] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:24.439 [2024-12-06 13:16:11.182088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:24.439 [2024-12-06 13:16:11.182098] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:24.439 [2024-12-06 13:16:11.182113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:24.439 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.439 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:24.439 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.439 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.439 [2024-12-06 13:16:11.228163] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:24.439 BaseBdev1 00:21:24.439 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.439 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:24.439 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:24.439 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:24.439 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:24.439 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:24.439 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:24.439 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:24.439 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.439 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.439 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.439 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:24.439 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.439 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.439 [ 00:21:24.439 { 00:21:24.439 "name": "BaseBdev1", 00:21:24.439 "aliases": [ 00:21:24.439 "9f253059-4bed-43ce-b4b4-d39e5b0f59a0" 00:21:24.439 ], 00:21:24.439 "product_name": "Malloc disk", 00:21:24.439 "block_size": 512, 00:21:24.439 
"num_blocks": 65536, 00:21:24.439 "uuid": "9f253059-4bed-43ce-b4b4-d39e5b0f59a0", 00:21:24.439 "assigned_rate_limits": { 00:21:24.439 "rw_ios_per_sec": 0, 00:21:24.439 "rw_mbytes_per_sec": 0, 00:21:24.439 "r_mbytes_per_sec": 0, 00:21:24.439 "w_mbytes_per_sec": 0 00:21:24.439 }, 00:21:24.439 "claimed": true, 00:21:24.439 "claim_type": "exclusive_write", 00:21:24.439 "zoned": false, 00:21:24.439 "supported_io_types": { 00:21:24.439 "read": true, 00:21:24.439 "write": true, 00:21:24.439 "unmap": true, 00:21:24.439 "flush": true, 00:21:24.439 "reset": true, 00:21:24.439 "nvme_admin": false, 00:21:24.439 "nvme_io": false, 00:21:24.439 "nvme_io_md": false, 00:21:24.439 "write_zeroes": true, 00:21:24.439 "zcopy": true, 00:21:24.439 "get_zone_info": false, 00:21:24.439 "zone_management": false, 00:21:24.439 "zone_append": false, 00:21:24.439 "compare": false, 00:21:24.439 "compare_and_write": false, 00:21:24.439 "abort": true, 00:21:24.439 "seek_hole": false, 00:21:24.439 "seek_data": false, 00:21:24.439 "copy": true, 00:21:24.439 "nvme_iov_md": false 00:21:24.439 }, 00:21:24.439 "memory_domains": [ 00:21:24.439 { 00:21:24.439 "dma_device_id": "system", 00:21:24.439 "dma_device_type": 1 00:21:24.439 }, 00:21:24.439 { 00:21:24.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:24.439 "dma_device_type": 2 00:21:24.439 } 00:21:24.439 ], 00:21:24.439 "driver_specific": {} 00:21:24.439 } 00:21:24.439 ] 00:21:24.439 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.439 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:24.439 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:24.439 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:24.439 13:16:11 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:24.440 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:24.440 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:24.440 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:24.440 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:24.440 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:24.440 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:24.440 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:24.440 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.440 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:24.440 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.440 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.440 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.440 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:24.440 "name": "Existed_Raid", 00:21:24.440 "uuid": "387e2206-81a0-4bc9-94c2-31b643faa6bb", 00:21:24.440 "strip_size_kb": 64, 00:21:24.440 "state": "configuring", 00:21:24.440 "raid_level": "raid5f", 00:21:24.440 "superblock": true, 00:21:24.440 "num_base_bdevs": 3, 00:21:24.440 "num_base_bdevs_discovered": 1, 00:21:24.440 "num_base_bdevs_operational": 3, 00:21:24.440 "base_bdevs_list": [ 00:21:24.440 { 00:21:24.440 
"name": "BaseBdev1", 00:21:24.440 "uuid": "9f253059-4bed-43ce-b4b4-d39e5b0f59a0", 00:21:24.440 "is_configured": true, 00:21:24.440 "data_offset": 2048, 00:21:24.440 "data_size": 63488 00:21:24.440 }, 00:21:24.440 { 00:21:24.440 "name": "BaseBdev2", 00:21:24.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.440 "is_configured": false, 00:21:24.440 "data_offset": 0, 00:21:24.440 "data_size": 0 00:21:24.440 }, 00:21:24.440 { 00:21:24.440 "name": "BaseBdev3", 00:21:24.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.440 "is_configured": false, 00:21:24.440 "data_offset": 0, 00:21:24.440 "data_size": 0 00:21:24.440 } 00:21:24.440 ] 00:21:24.440 }' 00:21:24.440 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:24.440 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.004 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:25.004 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.004 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.004 [2024-12-06 13:16:11.768352] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:25.004 [2024-12-06 13:16:11.768422] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:25.004 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.004 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:25.004 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.004 13:16:11 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:21:25.004 [2024-12-06 13:16:11.776415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:25.004 [2024-12-06 13:16:11.779069] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:25.004 [2024-12-06 13:16:11.779303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:25.004 [2024-12-06 13:16:11.779332] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:25.004 [2024-12-06 13:16:11.779351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:25.004 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.004 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:25.004 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:25.004 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:25.004 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:25.004 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:25.004 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:25.004 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:25.004 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:25.004 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:25.004 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:21:25.004 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:25.004 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:25.004 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.004 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:25.004 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.004 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.004 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.004 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:25.004 "name": "Existed_Raid", 00:21:25.004 "uuid": "65c08b8a-7463-4fda-97e6-2d235b3a38a7", 00:21:25.004 "strip_size_kb": 64, 00:21:25.004 "state": "configuring", 00:21:25.004 "raid_level": "raid5f", 00:21:25.004 "superblock": true, 00:21:25.004 "num_base_bdevs": 3, 00:21:25.004 "num_base_bdevs_discovered": 1, 00:21:25.004 "num_base_bdevs_operational": 3, 00:21:25.004 "base_bdevs_list": [ 00:21:25.004 { 00:21:25.004 "name": "BaseBdev1", 00:21:25.004 "uuid": "9f253059-4bed-43ce-b4b4-d39e5b0f59a0", 00:21:25.004 "is_configured": true, 00:21:25.004 "data_offset": 2048, 00:21:25.004 "data_size": 63488 00:21:25.004 }, 00:21:25.004 { 00:21:25.004 "name": "BaseBdev2", 00:21:25.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.004 "is_configured": false, 00:21:25.004 "data_offset": 0, 00:21:25.004 "data_size": 0 00:21:25.004 }, 00:21:25.004 { 00:21:25.004 "name": "BaseBdev3", 00:21:25.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.004 "is_configured": false, 00:21:25.004 "data_offset": 0, 00:21:25.004 "data_size": 
0 00:21:25.004 } 00:21:25.004 ] 00:21:25.004 }' 00:21:25.004 13:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:25.004 13:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.571 [2024-12-06 13:16:12.371402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:25.571 BaseBdev2 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.571 [ 00:21:25.571 { 00:21:25.571 "name": "BaseBdev2", 00:21:25.571 "aliases": [ 00:21:25.571 "4c6d555e-b290-422e-9c4b-316cb308de7f" 00:21:25.571 ], 00:21:25.571 "product_name": "Malloc disk", 00:21:25.571 "block_size": 512, 00:21:25.571 "num_blocks": 65536, 00:21:25.571 "uuid": "4c6d555e-b290-422e-9c4b-316cb308de7f", 00:21:25.571 "assigned_rate_limits": { 00:21:25.571 "rw_ios_per_sec": 0, 00:21:25.571 "rw_mbytes_per_sec": 0, 00:21:25.571 "r_mbytes_per_sec": 0, 00:21:25.571 "w_mbytes_per_sec": 0 00:21:25.571 }, 00:21:25.571 "claimed": true, 00:21:25.571 "claim_type": "exclusive_write", 00:21:25.571 "zoned": false, 00:21:25.571 "supported_io_types": { 00:21:25.571 "read": true, 00:21:25.571 "write": true, 00:21:25.571 "unmap": true, 00:21:25.571 "flush": true, 00:21:25.571 "reset": true, 00:21:25.571 "nvme_admin": false, 00:21:25.571 "nvme_io": false, 00:21:25.571 "nvme_io_md": false, 00:21:25.571 "write_zeroes": true, 00:21:25.571 "zcopy": true, 00:21:25.571 "get_zone_info": false, 00:21:25.571 "zone_management": false, 00:21:25.571 "zone_append": false, 00:21:25.571 "compare": false, 00:21:25.571 "compare_and_write": false, 00:21:25.571 "abort": true, 00:21:25.571 "seek_hole": false, 00:21:25.571 "seek_data": false, 00:21:25.571 "copy": true, 00:21:25.571 "nvme_iov_md": false 00:21:25.571 }, 00:21:25.571 "memory_domains": [ 00:21:25.571 { 00:21:25.571 "dma_device_id": "system", 00:21:25.571 "dma_device_type": 1 00:21:25.571 }, 00:21:25.571 { 00:21:25.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:25.571 "dma_device_type": 2 00:21:25.571 } 
00:21:25.571 ], 00:21:25.571 "driver_specific": {} 00:21:25.571 } 00:21:25.571 ] 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.571 13:16:12 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:25.571 "name": "Existed_Raid", 00:21:25.571 "uuid": "65c08b8a-7463-4fda-97e6-2d235b3a38a7", 00:21:25.571 "strip_size_kb": 64, 00:21:25.571 "state": "configuring", 00:21:25.571 "raid_level": "raid5f", 00:21:25.571 "superblock": true, 00:21:25.571 "num_base_bdevs": 3, 00:21:25.571 "num_base_bdevs_discovered": 2, 00:21:25.571 "num_base_bdevs_operational": 3, 00:21:25.571 "base_bdevs_list": [ 00:21:25.571 { 00:21:25.571 "name": "BaseBdev1", 00:21:25.571 "uuid": "9f253059-4bed-43ce-b4b4-d39e5b0f59a0", 00:21:25.571 "is_configured": true, 00:21:25.571 "data_offset": 2048, 00:21:25.571 "data_size": 63488 00:21:25.571 }, 00:21:25.571 { 00:21:25.571 "name": "BaseBdev2", 00:21:25.571 "uuid": "4c6d555e-b290-422e-9c4b-316cb308de7f", 00:21:25.571 "is_configured": true, 00:21:25.571 "data_offset": 2048, 00:21:25.571 "data_size": 63488 00:21:25.571 }, 00:21:25.571 { 00:21:25.571 "name": "BaseBdev3", 00:21:25.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.571 "is_configured": false, 00:21:25.571 "data_offset": 0, 00:21:25.571 "data_size": 0 00:21:25.571 } 00:21:25.571 ] 00:21:25.571 }' 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:25.571 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.137 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:26.137 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:21:26.137 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.137 [2024-12-06 13:16:12.989695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:26.137 [2024-12-06 13:16:12.990040] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:26.137 [2024-12-06 13:16:12.990071] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:26.137 BaseBdev3 00:21:26.137 [2024-12-06 13:16:12.990438] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:26.137 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.137 13:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:21:26.137 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:26.137 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:26.137 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:26.137 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:26.137 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:26.137 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:26.137 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.137 13:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.137 [2024-12-06 13:16:12.995858] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:26.137 [2024-12-06 13:16:12.996096] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:26.137 [2024-12-06 13:16:12.996594] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:26.137 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.137 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:26.137 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.137 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.137 [ 00:21:26.137 { 00:21:26.137 "name": "BaseBdev3", 00:21:26.137 "aliases": [ 00:21:26.137 "3a979b4c-c5e9-493c-888c-a7b2f098b562" 00:21:26.137 ], 00:21:26.137 "product_name": "Malloc disk", 00:21:26.137 "block_size": 512, 00:21:26.137 "num_blocks": 65536, 00:21:26.137 "uuid": "3a979b4c-c5e9-493c-888c-a7b2f098b562", 00:21:26.137 "assigned_rate_limits": { 00:21:26.137 "rw_ios_per_sec": 0, 00:21:26.137 "rw_mbytes_per_sec": 0, 00:21:26.137 "r_mbytes_per_sec": 0, 00:21:26.137 "w_mbytes_per_sec": 0 00:21:26.137 }, 00:21:26.137 "claimed": true, 00:21:26.137 "claim_type": "exclusive_write", 00:21:26.137 "zoned": false, 00:21:26.137 "supported_io_types": { 00:21:26.137 "read": true, 00:21:26.138 "write": true, 00:21:26.138 "unmap": true, 00:21:26.138 "flush": true, 00:21:26.138 "reset": true, 00:21:26.138 "nvme_admin": false, 00:21:26.138 "nvme_io": false, 00:21:26.138 "nvme_io_md": false, 00:21:26.138 "write_zeroes": true, 00:21:26.138 "zcopy": true, 00:21:26.138 "get_zone_info": false, 00:21:26.138 "zone_management": false, 00:21:26.138 "zone_append": false, 00:21:26.138 "compare": false, 00:21:26.138 "compare_and_write": false, 00:21:26.138 "abort": true, 00:21:26.138 "seek_hole": false, 00:21:26.138 "seek_data": false, 00:21:26.138 "copy": true, 00:21:26.138 
"nvme_iov_md": false 00:21:26.138 }, 00:21:26.138 "memory_domains": [ 00:21:26.138 { 00:21:26.138 "dma_device_id": "system", 00:21:26.138 "dma_device_type": 1 00:21:26.138 }, 00:21:26.138 { 00:21:26.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:26.138 "dma_device_type": 2 00:21:26.138 } 00:21:26.138 ], 00:21:26.138 "driver_specific": {} 00:21:26.138 } 00:21:26.138 ] 00:21:26.138 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.138 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:26.138 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:26.138 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:26.138 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:21:26.138 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:26.138 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:26.138 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:26.138 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:26.138 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:26.138 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:26.138 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:26.138 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:26.138 13:16:13 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:21:26.138 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:26.138 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.138 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.138 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:26.138 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.138 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:26.138 "name": "Existed_Raid", 00:21:26.138 "uuid": "65c08b8a-7463-4fda-97e6-2d235b3a38a7", 00:21:26.138 "strip_size_kb": 64, 00:21:26.138 "state": "online", 00:21:26.138 "raid_level": "raid5f", 00:21:26.138 "superblock": true, 00:21:26.138 "num_base_bdevs": 3, 00:21:26.138 "num_base_bdevs_discovered": 3, 00:21:26.138 "num_base_bdevs_operational": 3, 00:21:26.138 "base_bdevs_list": [ 00:21:26.138 { 00:21:26.138 "name": "BaseBdev1", 00:21:26.138 "uuid": "9f253059-4bed-43ce-b4b4-d39e5b0f59a0", 00:21:26.138 "is_configured": true, 00:21:26.138 "data_offset": 2048, 00:21:26.138 "data_size": 63488 00:21:26.138 }, 00:21:26.138 { 00:21:26.138 "name": "BaseBdev2", 00:21:26.138 "uuid": "4c6d555e-b290-422e-9c4b-316cb308de7f", 00:21:26.138 "is_configured": true, 00:21:26.138 "data_offset": 2048, 00:21:26.138 "data_size": 63488 00:21:26.138 }, 00:21:26.138 { 00:21:26.138 "name": "BaseBdev3", 00:21:26.138 "uuid": "3a979b4c-c5e9-493c-888c-a7b2f098b562", 00:21:26.138 "is_configured": true, 00:21:26.138 "data_offset": 2048, 00:21:26.138 "data_size": 63488 00:21:26.138 } 00:21:26.138 ] 00:21:26.138 }' 00:21:26.138 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:26.138 13:16:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.705 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:26.705 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:26.705 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:26.705 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:26.705 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:21:26.705 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:26.705 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:26.705 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:26.705 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.705 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.705 [2024-12-06 13:16:13.558830] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:26.706 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.706 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:26.706 "name": "Existed_Raid", 00:21:26.706 "aliases": [ 00:21:26.706 "65c08b8a-7463-4fda-97e6-2d235b3a38a7" 00:21:26.706 ], 00:21:26.706 "product_name": "Raid Volume", 00:21:26.706 "block_size": 512, 00:21:26.706 "num_blocks": 126976, 00:21:26.706 "uuid": "65c08b8a-7463-4fda-97e6-2d235b3a38a7", 00:21:26.706 "assigned_rate_limits": { 00:21:26.706 "rw_ios_per_sec": 0, 00:21:26.706 
"rw_mbytes_per_sec": 0, 00:21:26.706 "r_mbytes_per_sec": 0, 00:21:26.706 "w_mbytes_per_sec": 0 00:21:26.706 }, 00:21:26.706 "claimed": false, 00:21:26.706 "zoned": false, 00:21:26.706 "supported_io_types": { 00:21:26.706 "read": true, 00:21:26.706 "write": true, 00:21:26.706 "unmap": false, 00:21:26.706 "flush": false, 00:21:26.706 "reset": true, 00:21:26.706 "nvme_admin": false, 00:21:26.706 "nvme_io": false, 00:21:26.706 "nvme_io_md": false, 00:21:26.706 "write_zeroes": true, 00:21:26.706 "zcopy": false, 00:21:26.706 "get_zone_info": false, 00:21:26.706 "zone_management": false, 00:21:26.706 "zone_append": false, 00:21:26.706 "compare": false, 00:21:26.706 "compare_and_write": false, 00:21:26.706 "abort": false, 00:21:26.706 "seek_hole": false, 00:21:26.706 "seek_data": false, 00:21:26.706 "copy": false, 00:21:26.706 "nvme_iov_md": false 00:21:26.706 }, 00:21:26.706 "driver_specific": { 00:21:26.706 "raid": { 00:21:26.706 "uuid": "65c08b8a-7463-4fda-97e6-2d235b3a38a7", 00:21:26.706 "strip_size_kb": 64, 00:21:26.706 "state": "online", 00:21:26.706 "raid_level": "raid5f", 00:21:26.706 "superblock": true, 00:21:26.706 "num_base_bdevs": 3, 00:21:26.706 "num_base_bdevs_discovered": 3, 00:21:26.706 "num_base_bdevs_operational": 3, 00:21:26.706 "base_bdevs_list": [ 00:21:26.706 { 00:21:26.706 "name": "BaseBdev1", 00:21:26.706 "uuid": "9f253059-4bed-43ce-b4b4-d39e5b0f59a0", 00:21:26.706 "is_configured": true, 00:21:26.706 "data_offset": 2048, 00:21:26.706 "data_size": 63488 00:21:26.706 }, 00:21:26.706 { 00:21:26.706 "name": "BaseBdev2", 00:21:26.706 "uuid": "4c6d555e-b290-422e-9c4b-316cb308de7f", 00:21:26.706 "is_configured": true, 00:21:26.706 "data_offset": 2048, 00:21:26.706 "data_size": 63488 00:21:26.706 }, 00:21:26.706 { 00:21:26.706 "name": "BaseBdev3", 00:21:26.706 "uuid": "3a979b4c-c5e9-493c-888c-a7b2f098b562", 00:21:26.706 "is_configured": true, 00:21:26.706 "data_offset": 2048, 00:21:26.706 "data_size": 63488 00:21:26.706 } 00:21:26.706 ] 00:21:26.706 } 
00:21:26.706 } 00:21:26.706 }' 00:21:26.706 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:26.706 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:26.706 BaseBdev2 00:21:26.706 BaseBdev3' 00:21:26.706 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:26.965 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:26.965 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:26.965 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:26.965 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.965 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.965 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:26.965 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.965 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:26.965 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:26.965 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:26.965 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:26.965 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:26.965 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.965 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:26.965 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.965 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:26.965 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:26.965 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:26.965 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:26.965 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.965 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:26.965 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.965 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.965 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:26.965 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:26.965 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:26.965 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.965 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.965 [2024-12-06 13:16:13.894697] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:27.224 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.224 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:27.224 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:21:27.224 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:27.224 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:21:27.224 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:27.224 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:21:27.224 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:27.224 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:27.224 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:27.224 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:27.224 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:27.224 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:27.224 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:27.224 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:27.224 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:27.224 13:16:13 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.224 13:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:27.224 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.224 13:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.224 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.224 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:27.224 "name": "Existed_Raid", 00:21:27.224 "uuid": "65c08b8a-7463-4fda-97e6-2d235b3a38a7", 00:21:27.224 "strip_size_kb": 64, 00:21:27.224 "state": "online", 00:21:27.224 "raid_level": "raid5f", 00:21:27.224 "superblock": true, 00:21:27.224 "num_base_bdevs": 3, 00:21:27.224 "num_base_bdevs_discovered": 2, 00:21:27.224 "num_base_bdevs_operational": 2, 00:21:27.224 "base_bdevs_list": [ 00:21:27.224 { 00:21:27.224 "name": null, 00:21:27.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.224 "is_configured": false, 00:21:27.224 "data_offset": 0, 00:21:27.224 "data_size": 63488 00:21:27.224 }, 00:21:27.224 { 00:21:27.224 "name": "BaseBdev2", 00:21:27.224 "uuid": "4c6d555e-b290-422e-9c4b-316cb308de7f", 00:21:27.224 "is_configured": true, 00:21:27.224 "data_offset": 2048, 00:21:27.224 "data_size": 63488 00:21:27.224 }, 00:21:27.224 { 00:21:27.224 "name": "BaseBdev3", 00:21:27.224 "uuid": "3a979b4c-c5e9-493c-888c-a7b2f098b562", 00:21:27.224 "is_configured": true, 00:21:27.224 "data_offset": 2048, 00:21:27.224 "data_size": 63488 00:21:27.224 } 00:21:27.224 ] 00:21:27.224 }' 00:21:27.224 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:27.224 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.792 13:16:14 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:27.792 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:27.792 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:27.792 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.792 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.792 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.792 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.792 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:27.792 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:27.792 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:27.792 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.792 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.792 [2024-12-06 13:16:14.602194] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:27.792 [2024-12-06 13:16:14.602419] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:27.792 [2024-12-06 13:16:14.687738] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:27.792 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.792 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:27.792 13:16:14 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:27.792 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.792 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:27.792 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.792 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.792 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.792 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:27.792 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:27.792 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:21:27.792 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.792 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.792 [2024-12-06 13:16:14.751820] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:27.792 [2024-12-06 13:16:14.751925] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:28.051 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.051 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:28.051 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:28.051 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.051 13:16:14 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.051 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.051 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:28.051 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.051 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:28.051 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:28.051 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:21:28.051 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:21:28.051 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:28.051 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:28.051 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.051 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.051 BaseBdev2 00:21:28.051 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.051 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:21:28.051 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:28.051 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:28.051 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:28.051 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # 
[[ -z '' ]] 00:21:28.051 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:28.051 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:28.051 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.051 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.051 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.051 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:28.051 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.051 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.051 [ 00:21:28.051 { 00:21:28.051 "name": "BaseBdev2", 00:21:28.051 "aliases": [ 00:21:28.051 "8b95ae2f-fe58-4a67-98a4-b386a1c3a745" 00:21:28.051 ], 00:21:28.051 "product_name": "Malloc disk", 00:21:28.052 "block_size": 512, 00:21:28.052 "num_blocks": 65536, 00:21:28.052 "uuid": "8b95ae2f-fe58-4a67-98a4-b386a1c3a745", 00:21:28.052 "assigned_rate_limits": { 00:21:28.052 "rw_ios_per_sec": 0, 00:21:28.052 "rw_mbytes_per_sec": 0, 00:21:28.052 "r_mbytes_per_sec": 0, 00:21:28.052 "w_mbytes_per_sec": 0 00:21:28.052 }, 00:21:28.052 "claimed": false, 00:21:28.052 "zoned": false, 00:21:28.052 "supported_io_types": { 00:21:28.052 "read": true, 00:21:28.052 "write": true, 00:21:28.052 "unmap": true, 00:21:28.052 "flush": true, 00:21:28.052 "reset": true, 00:21:28.052 "nvme_admin": false, 00:21:28.052 "nvme_io": false, 00:21:28.052 "nvme_io_md": false, 00:21:28.052 "write_zeroes": true, 00:21:28.052 "zcopy": true, 00:21:28.052 "get_zone_info": false, 00:21:28.052 "zone_management": false, 00:21:28.052 "zone_append": false, 
00:21:28.052 "compare": false, 00:21:28.052 "compare_and_write": false, 00:21:28.052 "abort": true, 00:21:28.052 "seek_hole": false, 00:21:28.052 "seek_data": false, 00:21:28.052 "copy": true, 00:21:28.052 "nvme_iov_md": false 00:21:28.052 }, 00:21:28.052 "memory_domains": [ 00:21:28.052 { 00:21:28.052 "dma_device_id": "system", 00:21:28.052 "dma_device_type": 1 00:21:28.052 }, 00:21:28.052 { 00:21:28.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:28.052 "dma_device_type": 2 00:21:28.052 } 00:21:28.052 ], 00:21:28.052 "driver_specific": {} 00:21:28.052 } 00:21:28.052 ] 00:21:28.052 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.052 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:28.052 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:28.052 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:28.052 13:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:28.052 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.052 13:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.052 BaseBdev3 00:21:28.052 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.052 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:21:28.052 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:21:28.052 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:28.052 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:28.052 
13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:28.052 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:28.052 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:28.052 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.052 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.052 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.052 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:28.052 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.052 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.052 [ 00:21:28.052 { 00:21:28.052 "name": "BaseBdev3", 00:21:28.052 "aliases": [ 00:21:28.052 "62aa95cb-71cf-480a-af71-420cf7d97538" 00:21:28.052 ], 00:21:28.052 "product_name": "Malloc disk", 00:21:28.052 "block_size": 512, 00:21:28.052 "num_blocks": 65536, 00:21:28.052 "uuid": "62aa95cb-71cf-480a-af71-420cf7d97538", 00:21:28.052 "assigned_rate_limits": { 00:21:28.052 "rw_ios_per_sec": 0, 00:21:28.052 "rw_mbytes_per_sec": 0, 00:21:28.052 "r_mbytes_per_sec": 0, 00:21:28.052 "w_mbytes_per_sec": 0 00:21:28.052 }, 00:21:28.052 "claimed": false, 00:21:28.052 "zoned": false, 00:21:28.052 "supported_io_types": { 00:21:28.052 "read": true, 00:21:28.052 "write": true, 00:21:28.052 "unmap": true, 00:21:28.052 "flush": true, 00:21:28.052 "reset": true, 00:21:28.052 "nvme_admin": false, 00:21:28.052 "nvme_io": false, 00:21:28.052 "nvme_io_md": false, 00:21:28.052 "write_zeroes": true, 00:21:28.052 "zcopy": true, 00:21:28.052 "get_zone_info": 
false, 00:21:28.052 "zone_management": false, 00:21:28.052 "zone_append": false, 00:21:28.052 "compare": false, 00:21:28.052 "compare_and_write": false, 00:21:28.052 "abort": true, 00:21:28.052 "seek_hole": false, 00:21:28.052 "seek_data": false, 00:21:28.052 "copy": true, 00:21:28.052 "nvme_iov_md": false 00:21:28.052 }, 00:21:28.052 "memory_domains": [ 00:21:28.052 { 00:21:28.052 "dma_device_id": "system", 00:21:28.052 "dma_device_type": 1 00:21:28.315 }, 00:21:28.315 { 00:21:28.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:28.315 "dma_device_type": 2 00:21:28.315 } 00:21:28.315 ], 00:21:28.315 "driver_specific": {} 00:21:28.315 } 00:21:28.315 ] 00:21:28.315 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.315 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:28.315 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:28.315 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:28.315 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:21:28.315 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.315 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.315 [2024-12-06 13:16:15.073531] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:28.315 [2024-12-06 13:16:15.073808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:28.315 [2024-12-06 13:16:15.073977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:28.315 [2024-12-06 13:16:15.076571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:21:28.315 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.315 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:28.315 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:28.315 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:28.315 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:28.315 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:28.315 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:28.315 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:28.315 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:28.315 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:28.315 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:28.315 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:28.315 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.316 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.316 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.316 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.316 13:16:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:28.316 "name": "Existed_Raid", 00:21:28.316 "uuid": "e0307234-5c31-4f10-a3aa-155662aef57e", 00:21:28.316 "strip_size_kb": 64, 00:21:28.316 "state": "configuring", 00:21:28.316 "raid_level": "raid5f", 00:21:28.316 "superblock": true, 00:21:28.316 "num_base_bdevs": 3, 00:21:28.316 "num_base_bdevs_discovered": 2, 00:21:28.316 "num_base_bdevs_operational": 3, 00:21:28.316 "base_bdevs_list": [ 00:21:28.316 { 00:21:28.316 "name": "BaseBdev1", 00:21:28.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.316 "is_configured": false, 00:21:28.316 "data_offset": 0, 00:21:28.316 "data_size": 0 00:21:28.316 }, 00:21:28.316 { 00:21:28.316 "name": "BaseBdev2", 00:21:28.316 "uuid": "8b95ae2f-fe58-4a67-98a4-b386a1c3a745", 00:21:28.316 "is_configured": true, 00:21:28.316 "data_offset": 2048, 00:21:28.316 "data_size": 63488 00:21:28.316 }, 00:21:28.316 { 00:21:28.316 "name": "BaseBdev3", 00:21:28.316 "uuid": "62aa95cb-71cf-480a-af71-420cf7d97538", 00:21:28.316 "is_configured": true, 00:21:28.316 "data_offset": 2048, 00:21:28.316 "data_size": 63488 00:21:28.316 } 00:21:28.316 ] 00:21:28.316 }' 00:21:28.316 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:28.316 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.945 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:28.945 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.945 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.945 [2024-12-06 13:16:15.617732] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:28.945 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.945 
13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:28.945 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:28.945 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:28.945 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:28.945 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:28.945 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:28.945 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:28.945 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:28.945 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:28.945 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:28.945 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.945 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:28.945 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.945 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.945 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.945 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:28.945 "name": "Existed_Raid", 00:21:28.945 "uuid": 
"e0307234-5c31-4f10-a3aa-155662aef57e", 00:21:28.945 "strip_size_kb": 64, 00:21:28.945 "state": "configuring", 00:21:28.945 "raid_level": "raid5f", 00:21:28.945 "superblock": true, 00:21:28.945 "num_base_bdevs": 3, 00:21:28.945 "num_base_bdevs_discovered": 1, 00:21:28.945 "num_base_bdevs_operational": 3, 00:21:28.945 "base_bdevs_list": [ 00:21:28.945 { 00:21:28.945 "name": "BaseBdev1", 00:21:28.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.945 "is_configured": false, 00:21:28.945 "data_offset": 0, 00:21:28.945 "data_size": 0 00:21:28.945 }, 00:21:28.945 { 00:21:28.945 "name": null, 00:21:28.945 "uuid": "8b95ae2f-fe58-4a67-98a4-b386a1c3a745", 00:21:28.945 "is_configured": false, 00:21:28.945 "data_offset": 0, 00:21:28.945 "data_size": 63488 00:21:28.945 }, 00:21:28.945 { 00:21:28.945 "name": "BaseBdev3", 00:21:28.945 "uuid": "62aa95cb-71cf-480a-af71-420cf7d97538", 00:21:28.945 "is_configured": true, 00:21:28.945 "data_offset": 2048, 00:21:28.945 "data_size": 63488 00:21:28.945 } 00:21:28.945 ] 00:21:28.945 }' 00:21:28.945 13:16:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:28.945 13:16:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.203 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:29.203 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.203 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.203 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.203 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.203 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:21:29.204 13:16:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:29.204 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.204 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.462 [2024-12-06 13:16:16.244547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:29.462 BaseBdev1 00:21:29.462 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.462 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:21:29.462 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:29.462 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:29.462 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:29.462 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:29.462 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:29.462 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:29.462 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.462 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.462 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.462 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:29.462 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:21:29.462 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.462 [ 00:21:29.462 { 00:21:29.462 "name": "BaseBdev1", 00:21:29.462 "aliases": [ 00:21:29.462 "b11cd8e2-6005-48a1-b97b-df571fb1ef1b" 00:21:29.462 ], 00:21:29.462 "product_name": "Malloc disk", 00:21:29.462 "block_size": 512, 00:21:29.462 "num_blocks": 65536, 00:21:29.462 "uuid": "b11cd8e2-6005-48a1-b97b-df571fb1ef1b", 00:21:29.462 "assigned_rate_limits": { 00:21:29.462 "rw_ios_per_sec": 0, 00:21:29.462 "rw_mbytes_per_sec": 0, 00:21:29.462 "r_mbytes_per_sec": 0, 00:21:29.462 "w_mbytes_per_sec": 0 00:21:29.462 }, 00:21:29.462 "claimed": true, 00:21:29.462 "claim_type": "exclusive_write", 00:21:29.462 "zoned": false, 00:21:29.462 "supported_io_types": { 00:21:29.462 "read": true, 00:21:29.462 "write": true, 00:21:29.462 "unmap": true, 00:21:29.462 "flush": true, 00:21:29.462 "reset": true, 00:21:29.462 "nvme_admin": false, 00:21:29.462 "nvme_io": false, 00:21:29.462 "nvme_io_md": false, 00:21:29.462 "write_zeroes": true, 00:21:29.462 "zcopy": true, 00:21:29.462 "get_zone_info": false, 00:21:29.462 "zone_management": false, 00:21:29.462 "zone_append": false, 00:21:29.462 "compare": false, 00:21:29.462 "compare_and_write": false, 00:21:29.462 "abort": true, 00:21:29.462 "seek_hole": false, 00:21:29.462 "seek_data": false, 00:21:29.462 "copy": true, 00:21:29.462 "nvme_iov_md": false 00:21:29.462 }, 00:21:29.462 "memory_domains": [ 00:21:29.462 { 00:21:29.462 "dma_device_id": "system", 00:21:29.462 "dma_device_type": 1 00:21:29.462 }, 00:21:29.462 { 00:21:29.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:29.462 "dma_device_type": 2 00:21:29.462 } 00:21:29.462 ], 00:21:29.462 "driver_specific": {} 00:21:29.462 } 00:21:29.462 ] 00:21:29.462 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.462 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # 
return 0 00:21:29.462 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:29.462 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:29.462 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:29.462 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:29.462 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:29.462 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:29.462 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:29.462 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:29.462 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:29.462 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:29.463 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.463 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:29.463 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.463 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.463 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.463 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:29.463 "name": "Existed_Raid", 00:21:29.463 "uuid": 
"e0307234-5c31-4f10-a3aa-155662aef57e", 00:21:29.463 "strip_size_kb": 64, 00:21:29.463 "state": "configuring", 00:21:29.463 "raid_level": "raid5f", 00:21:29.463 "superblock": true, 00:21:29.463 "num_base_bdevs": 3, 00:21:29.463 "num_base_bdevs_discovered": 2, 00:21:29.463 "num_base_bdevs_operational": 3, 00:21:29.463 "base_bdevs_list": [ 00:21:29.463 { 00:21:29.463 "name": "BaseBdev1", 00:21:29.463 "uuid": "b11cd8e2-6005-48a1-b97b-df571fb1ef1b", 00:21:29.463 "is_configured": true, 00:21:29.463 "data_offset": 2048, 00:21:29.463 "data_size": 63488 00:21:29.463 }, 00:21:29.463 { 00:21:29.463 "name": null, 00:21:29.463 "uuid": "8b95ae2f-fe58-4a67-98a4-b386a1c3a745", 00:21:29.463 "is_configured": false, 00:21:29.463 "data_offset": 0, 00:21:29.463 "data_size": 63488 00:21:29.463 }, 00:21:29.463 { 00:21:29.463 "name": "BaseBdev3", 00:21:29.463 "uuid": "62aa95cb-71cf-480a-af71-420cf7d97538", 00:21:29.463 "is_configured": true, 00:21:29.463 "data_offset": 2048, 00:21:29.463 "data_size": 63488 00:21:29.463 } 00:21:29.463 ] 00:21:29.463 }' 00:21:29.463 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:29.463 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.030 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.030 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:30.030 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.030 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.030 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.030 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:21:30.030 13:16:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:21:30.030 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.030 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.030 [2024-12-06 13:16:16.872800] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:30.030 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.030 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:30.030 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:30.030 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:30.030 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:30.030 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:30.030 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:30.030 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:30.030 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:30.030 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:30.030 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:30.030 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:30.030 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:21:30.030 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.030 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.030 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.030 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:30.030 "name": "Existed_Raid", 00:21:30.030 "uuid": "e0307234-5c31-4f10-a3aa-155662aef57e", 00:21:30.030 "strip_size_kb": 64, 00:21:30.030 "state": "configuring", 00:21:30.030 "raid_level": "raid5f", 00:21:30.030 "superblock": true, 00:21:30.030 "num_base_bdevs": 3, 00:21:30.030 "num_base_bdevs_discovered": 1, 00:21:30.030 "num_base_bdevs_operational": 3, 00:21:30.030 "base_bdevs_list": [ 00:21:30.030 { 00:21:30.030 "name": "BaseBdev1", 00:21:30.030 "uuid": "b11cd8e2-6005-48a1-b97b-df571fb1ef1b", 00:21:30.030 "is_configured": true, 00:21:30.030 "data_offset": 2048, 00:21:30.030 "data_size": 63488 00:21:30.030 }, 00:21:30.030 { 00:21:30.030 "name": null, 00:21:30.030 "uuid": "8b95ae2f-fe58-4a67-98a4-b386a1c3a745", 00:21:30.030 "is_configured": false, 00:21:30.030 "data_offset": 0, 00:21:30.030 "data_size": 63488 00:21:30.030 }, 00:21:30.030 { 00:21:30.030 "name": null, 00:21:30.030 "uuid": "62aa95cb-71cf-480a-af71-420cf7d97538", 00:21:30.030 "is_configured": false, 00:21:30.030 "data_offset": 0, 00:21:30.030 "data_size": 63488 00:21:30.030 } 00:21:30.030 ] 00:21:30.030 }' 00:21:30.030 13:16:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:30.030 13:16:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.597 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.597 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # 
jq '.[0].base_bdevs_list[2].is_configured' 00:21:30.597 13:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.597 13:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.597 13:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.597 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:21:30.597 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:30.597 13:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.597 13:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.597 [2024-12-06 13:16:17.469000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:30.597 13:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.597 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:30.597 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:30.597 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:30.597 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:30.597 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:30.597 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:30.597 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:30.597 13:16:17 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:30.597 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:30.597 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:30.597 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.597 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:30.597 13:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.597 13:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.597 13:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.597 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:30.597 "name": "Existed_Raid", 00:21:30.597 "uuid": "e0307234-5c31-4f10-a3aa-155662aef57e", 00:21:30.597 "strip_size_kb": 64, 00:21:30.597 "state": "configuring", 00:21:30.597 "raid_level": "raid5f", 00:21:30.597 "superblock": true, 00:21:30.597 "num_base_bdevs": 3, 00:21:30.597 "num_base_bdevs_discovered": 2, 00:21:30.597 "num_base_bdevs_operational": 3, 00:21:30.597 "base_bdevs_list": [ 00:21:30.597 { 00:21:30.597 "name": "BaseBdev1", 00:21:30.597 "uuid": "b11cd8e2-6005-48a1-b97b-df571fb1ef1b", 00:21:30.597 "is_configured": true, 00:21:30.597 "data_offset": 2048, 00:21:30.597 "data_size": 63488 00:21:30.597 }, 00:21:30.597 { 00:21:30.597 "name": null, 00:21:30.597 "uuid": "8b95ae2f-fe58-4a67-98a4-b386a1c3a745", 00:21:30.597 "is_configured": false, 00:21:30.597 "data_offset": 0, 00:21:30.597 "data_size": 63488 00:21:30.597 }, 00:21:30.597 { 00:21:30.597 "name": "BaseBdev3", 00:21:30.597 "uuid": "62aa95cb-71cf-480a-af71-420cf7d97538", 00:21:30.597 
"is_configured": true, 00:21:30.597 "data_offset": 2048, 00:21:30.597 "data_size": 63488 00:21:30.597 } 00:21:30.597 ] 00:21:30.597 }' 00:21:30.597 13:16:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:30.597 13:16:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.164 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:31.164 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.164 13:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.164 13:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.164 13:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.164 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:21:31.164 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:31.164 13:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.164 13:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.165 [2024-12-06 13:16:18.089204] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:31.422 13:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.422 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:31.422 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:31.422 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:21:31.422 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:31.422 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:31.422 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:31.422 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:31.422 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:31.422 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:31.422 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:31.422 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.422 13:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.422 13:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.422 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:31.422 13:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.422 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:31.422 "name": "Existed_Raid", 00:21:31.422 "uuid": "e0307234-5c31-4f10-a3aa-155662aef57e", 00:21:31.422 "strip_size_kb": 64, 00:21:31.422 "state": "configuring", 00:21:31.422 "raid_level": "raid5f", 00:21:31.422 "superblock": true, 00:21:31.422 "num_base_bdevs": 3, 00:21:31.422 "num_base_bdevs_discovered": 1, 00:21:31.422 "num_base_bdevs_operational": 3, 00:21:31.422 "base_bdevs_list": [ 00:21:31.422 { 00:21:31.422 "name": null, 00:21:31.422 
"uuid": "b11cd8e2-6005-48a1-b97b-df571fb1ef1b", 00:21:31.422 "is_configured": false, 00:21:31.422 "data_offset": 0, 00:21:31.422 "data_size": 63488 00:21:31.422 }, 00:21:31.422 { 00:21:31.422 "name": null, 00:21:31.422 "uuid": "8b95ae2f-fe58-4a67-98a4-b386a1c3a745", 00:21:31.422 "is_configured": false, 00:21:31.422 "data_offset": 0, 00:21:31.422 "data_size": 63488 00:21:31.422 }, 00:21:31.422 { 00:21:31.422 "name": "BaseBdev3", 00:21:31.422 "uuid": "62aa95cb-71cf-480a-af71-420cf7d97538", 00:21:31.422 "is_configured": true, 00:21:31.422 "data_offset": 2048, 00:21:31.422 "data_size": 63488 00:21:31.422 } 00:21:31.422 ] 00:21:31.422 }' 00:21:31.422 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:31.422 13:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.985 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.985 13:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.985 13:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.985 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:31.985 13:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.985 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:21:31.985 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:31.985 13:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.985 13:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.985 [2024-12-06 13:16:18.748178] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:31.985 13:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.985 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:31.985 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:31.985 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:31.985 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:31.985 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:31.985 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:31.985 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:31.985 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:31.985 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:31.985 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:31.985 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.985 13:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.985 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:31.985 13:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.985 13:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:21:31.985 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:31.985 "name": "Existed_Raid", 00:21:31.985 "uuid": "e0307234-5c31-4f10-a3aa-155662aef57e", 00:21:31.985 "strip_size_kb": 64, 00:21:31.985 "state": "configuring", 00:21:31.985 "raid_level": "raid5f", 00:21:31.985 "superblock": true, 00:21:31.985 "num_base_bdevs": 3, 00:21:31.985 "num_base_bdevs_discovered": 2, 00:21:31.985 "num_base_bdevs_operational": 3, 00:21:31.985 "base_bdevs_list": [ 00:21:31.985 { 00:21:31.985 "name": null, 00:21:31.985 "uuid": "b11cd8e2-6005-48a1-b97b-df571fb1ef1b", 00:21:31.985 "is_configured": false, 00:21:31.985 "data_offset": 0, 00:21:31.985 "data_size": 63488 00:21:31.985 }, 00:21:31.985 { 00:21:31.985 "name": "BaseBdev2", 00:21:31.985 "uuid": "8b95ae2f-fe58-4a67-98a4-b386a1c3a745", 00:21:31.985 "is_configured": true, 00:21:31.985 "data_offset": 2048, 00:21:31.985 "data_size": 63488 00:21:31.985 }, 00:21:31.985 { 00:21:31.985 "name": "BaseBdev3", 00:21:31.985 "uuid": "62aa95cb-71cf-480a-af71-420cf7d97538", 00:21:31.985 "is_configured": true, 00:21:31.985 "data_offset": 2048, 00:21:31.985 "data_size": 63488 00:21:31.985 } 00:21:31.985 ] 00:21:31.985 }' 00:21:31.985 13:16:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:31.985 13:16:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b11cd8e2-6005-48a1-b97b-df571fb1ef1b 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.550 [2024-12-06 13:16:19.404875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:32.550 [2024-12-06 13:16:19.405224] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:32.550 [2024-12-06 13:16:19.405250] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:32.550 [2024-12-06 13:16:19.405634] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:21:32.550 NewBaseBdev 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.550 [2024-12-06 13:16:19.410939] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:32.550 [2024-12-06 13:16:19.410966] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:21:32.550 [2024-12-06 13:16:19.411321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.550 [ 00:21:32.550 { 00:21:32.550 "name": "NewBaseBdev", 00:21:32.550 "aliases": [ 00:21:32.550 "b11cd8e2-6005-48a1-b97b-df571fb1ef1b" 00:21:32.550 ], 00:21:32.550 "product_name": "Malloc disk", 00:21:32.550 "block_size": 512, 
00:21:32.550 "num_blocks": 65536, 00:21:32.550 "uuid": "b11cd8e2-6005-48a1-b97b-df571fb1ef1b", 00:21:32.550 "assigned_rate_limits": { 00:21:32.550 "rw_ios_per_sec": 0, 00:21:32.550 "rw_mbytes_per_sec": 0, 00:21:32.550 "r_mbytes_per_sec": 0, 00:21:32.550 "w_mbytes_per_sec": 0 00:21:32.550 }, 00:21:32.550 "claimed": true, 00:21:32.550 "claim_type": "exclusive_write", 00:21:32.550 "zoned": false, 00:21:32.550 "supported_io_types": { 00:21:32.550 "read": true, 00:21:32.550 "write": true, 00:21:32.550 "unmap": true, 00:21:32.550 "flush": true, 00:21:32.550 "reset": true, 00:21:32.550 "nvme_admin": false, 00:21:32.550 "nvme_io": false, 00:21:32.550 "nvme_io_md": false, 00:21:32.550 "write_zeroes": true, 00:21:32.550 "zcopy": true, 00:21:32.550 "get_zone_info": false, 00:21:32.550 "zone_management": false, 00:21:32.550 "zone_append": false, 00:21:32.550 "compare": false, 00:21:32.550 "compare_and_write": false, 00:21:32.550 "abort": true, 00:21:32.550 "seek_hole": false, 00:21:32.550 "seek_data": false, 00:21:32.550 "copy": true, 00:21:32.550 "nvme_iov_md": false 00:21:32.550 }, 00:21:32.550 "memory_domains": [ 00:21:32.550 { 00:21:32.550 "dma_device_id": "system", 00:21:32.550 "dma_device_type": 1 00:21:32.550 }, 00:21:32.550 { 00:21:32.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:32.550 "dma_device_type": 2 00:21:32.550 } 00:21:32.550 ], 00:21:32.550 "driver_specific": {} 00:21:32.550 } 00:21:32.550 ] 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:32.550 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:32.551 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:32.551 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:32.551 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.551 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:32.551 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.551 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.551 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.551 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:32.551 "name": "Existed_Raid", 00:21:32.551 "uuid": "e0307234-5c31-4f10-a3aa-155662aef57e", 00:21:32.551 "strip_size_kb": 64, 00:21:32.551 "state": "online", 00:21:32.551 "raid_level": "raid5f", 00:21:32.551 "superblock": true, 00:21:32.551 "num_base_bdevs": 3, 00:21:32.551 "num_base_bdevs_discovered": 3, 00:21:32.551 "num_base_bdevs_operational": 3, 00:21:32.551 "base_bdevs_list": [ 00:21:32.551 { 00:21:32.551 "name": 
"NewBaseBdev", 00:21:32.551 "uuid": "b11cd8e2-6005-48a1-b97b-df571fb1ef1b", 00:21:32.551 "is_configured": true, 00:21:32.551 "data_offset": 2048, 00:21:32.551 "data_size": 63488 00:21:32.551 }, 00:21:32.551 { 00:21:32.551 "name": "BaseBdev2", 00:21:32.551 "uuid": "8b95ae2f-fe58-4a67-98a4-b386a1c3a745", 00:21:32.551 "is_configured": true, 00:21:32.551 "data_offset": 2048, 00:21:32.551 "data_size": 63488 00:21:32.551 }, 00:21:32.551 { 00:21:32.551 "name": "BaseBdev3", 00:21:32.551 "uuid": "62aa95cb-71cf-480a-af71-420cf7d97538", 00:21:32.551 "is_configured": true, 00:21:32.551 "data_offset": 2048, 00:21:32.551 "data_size": 63488 00:21:32.551 } 00:21:32.551 ] 00:21:32.551 }' 00:21:32.551 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:32.551 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.120 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:21:33.120 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:33.120 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:33.120 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:33.120 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:21:33.120 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:33.120 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:33.120 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:33.120 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.120 13:16:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.120 [2024-12-06 13:16:19.966186] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:33.120 13:16:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.120 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:33.120 "name": "Existed_Raid", 00:21:33.120 "aliases": [ 00:21:33.120 "e0307234-5c31-4f10-a3aa-155662aef57e" 00:21:33.120 ], 00:21:33.120 "product_name": "Raid Volume", 00:21:33.120 "block_size": 512, 00:21:33.120 "num_blocks": 126976, 00:21:33.120 "uuid": "e0307234-5c31-4f10-a3aa-155662aef57e", 00:21:33.120 "assigned_rate_limits": { 00:21:33.120 "rw_ios_per_sec": 0, 00:21:33.120 "rw_mbytes_per_sec": 0, 00:21:33.120 "r_mbytes_per_sec": 0, 00:21:33.120 "w_mbytes_per_sec": 0 00:21:33.120 }, 00:21:33.120 "claimed": false, 00:21:33.120 "zoned": false, 00:21:33.120 "supported_io_types": { 00:21:33.120 "read": true, 00:21:33.120 "write": true, 00:21:33.120 "unmap": false, 00:21:33.120 "flush": false, 00:21:33.120 "reset": true, 00:21:33.120 "nvme_admin": false, 00:21:33.120 "nvme_io": false, 00:21:33.120 "nvme_io_md": false, 00:21:33.120 "write_zeroes": true, 00:21:33.120 "zcopy": false, 00:21:33.120 "get_zone_info": false, 00:21:33.120 "zone_management": false, 00:21:33.120 "zone_append": false, 00:21:33.120 "compare": false, 00:21:33.120 "compare_and_write": false, 00:21:33.120 "abort": false, 00:21:33.120 "seek_hole": false, 00:21:33.120 "seek_data": false, 00:21:33.120 "copy": false, 00:21:33.120 "nvme_iov_md": false 00:21:33.120 }, 00:21:33.120 "driver_specific": { 00:21:33.120 "raid": { 00:21:33.120 "uuid": "e0307234-5c31-4f10-a3aa-155662aef57e", 00:21:33.120 "strip_size_kb": 64, 00:21:33.120 "state": "online", 00:21:33.120 "raid_level": "raid5f", 00:21:33.120 "superblock": true, 00:21:33.120 "num_base_bdevs": 3, 00:21:33.120 
"num_base_bdevs_discovered": 3, 00:21:33.120 "num_base_bdevs_operational": 3, 00:21:33.120 "base_bdevs_list": [ 00:21:33.120 { 00:21:33.120 "name": "NewBaseBdev", 00:21:33.120 "uuid": "b11cd8e2-6005-48a1-b97b-df571fb1ef1b", 00:21:33.120 "is_configured": true, 00:21:33.120 "data_offset": 2048, 00:21:33.120 "data_size": 63488 00:21:33.120 }, 00:21:33.120 { 00:21:33.120 "name": "BaseBdev2", 00:21:33.120 "uuid": "8b95ae2f-fe58-4a67-98a4-b386a1c3a745", 00:21:33.120 "is_configured": true, 00:21:33.120 "data_offset": 2048, 00:21:33.120 "data_size": 63488 00:21:33.120 }, 00:21:33.120 { 00:21:33.120 "name": "BaseBdev3", 00:21:33.120 "uuid": "62aa95cb-71cf-480a-af71-420cf7d97538", 00:21:33.120 "is_configured": true, 00:21:33.120 "data_offset": 2048, 00:21:33.120 "data_size": 63488 00:21:33.120 } 00:21:33.120 ] 00:21:33.120 } 00:21:33.120 } 00:21:33.120 }' 00:21:33.120 13:16:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:33.120 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:21:33.121 BaseBdev2 00:21:33.121 BaseBdev3' 00:21:33.121 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:33.121 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:33.121 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:33.121 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:21:33.121 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.121 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.121 13:16:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:33.121 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.441 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:33.441 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:33.441 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:33.441 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:33.441 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.441 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.441 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:33.441 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.441 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:33.441 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:33.441 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:33.441 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:33.441 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:33.441 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:33.441 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.441 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.441 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:33.441 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:33.441 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:33.441 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.441 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.441 [2024-12-06 13:16:20.261982] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:33.441 [2024-12-06 13:16:20.262020] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:33.441 [2024-12-06 13:16:20.262164] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:33.441 [2024-12-06 13:16:20.262674] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:33.441 [2024-12-06 13:16:20.262701] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:21:33.441 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.441 13:16:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81210 00:21:33.441 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81210 ']' 00:21:33.442 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 81210 00:21:33.442 13:16:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:21:33.442 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:33.442 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81210 00:21:33.442 killing process with pid 81210 00:21:33.442 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:33.442 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:33.442 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81210' 00:21:33.442 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 81210 00:21:33.442 [2024-12-06 13:16:20.295911] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:33.442 13:16:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 81210 00:21:33.701 [2024-12-06 13:16:20.599962] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:35.075 13:16:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:21:35.075 00:21:35.075 real 0m12.255s 00:21:35.075 user 0m20.130s 00:21:35.075 sys 0m1.834s 00:21:35.075 ************************************ 00:21:35.075 END TEST raid5f_state_function_test_sb 00:21:35.075 ************************************ 00:21:35.075 13:16:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:35.075 13:16:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:35.075 13:16:21 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:21:35.075 13:16:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:35.075 
13:16:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:35.075 13:16:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:35.075 ************************************ 00:21:35.075 START TEST raid5f_superblock_test 00:21:35.075 ************************************ 00:21:35.075 13:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:21:35.075 13:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:21:35.075 13:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:21:35.075 13:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:35.075 13:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:35.075 13:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:35.075 13:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:21:35.075 13:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:35.075 13:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:35.075 13:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:35.075 13:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:35.075 13:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:35.075 13:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:35.075 13:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:35.075 13:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:21:35.075 13:16:21 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:21:35.075 13:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:21:35.075 13:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81845 00:21:35.075 13:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:35.075 13:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81845 00:21:35.075 13:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81845 ']' 00:21:35.075 13:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.075 13:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:35.075 13:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.075 13:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:35.075 13:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.075 [2024-12-06 13:16:21.904451] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:21:35.075 [2024-12-06 13:16:21.904932] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81845 ] 00:21:35.075 [2024-12-06 13:16:22.082635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.334 [2024-12-06 13:16:22.238622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.592 [2024-12-06 13:16:22.469650] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:35.592 [2024-12-06 13:16:22.469697] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.160 malloc1 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.160 [2024-12-06 13:16:23.098171] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:36.160 [2024-12-06 13:16:23.098469] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:36.160 [2024-12-06 13:16:23.098663] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:36.160 [2024-12-06 13:16:23.098791] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:36.160 [2024-12-06 13:16:23.102356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:36.160 [2024-12-06 13:16:23.102580] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:36.160 pt1 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.160 malloc2 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.160 [2024-12-06 13:16:23.159684] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:36.160 [2024-12-06 13:16:23.159926] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:36.160 [2024-12-06 13:16:23.160078] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:36.160 [2024-12-06 13:16:23.160203] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:36.160 [2024-12-06 13:16:23.163423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:36.160 [2024-12-06 13:16:23.163627] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:36.160 pt2 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.160 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.419 malloc3 00:21:36.419 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.419 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:36.419 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.419 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.419 [2024-12-06 13:16:23.232578] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:36.419 [2024-12-06 13:16:23.232705] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:36.419 [2024-12-06 13:16:23.232743] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:36.419 [2024-12-06 13:16:23.232759] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:36.419 [2024-12-06 13:16:23.236151] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:36.419 [2024-12-06 13:16:23.236195] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:36.419 pt3 00:21:36.419 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.419 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:36.419 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:36.419 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:21:36.419 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.420 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.420 [2024-12-06 13:16:23.244629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:36.420 [2024-12-06 13:16:23.247530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:36.420 [2024-12-06 13:16:23.247800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:36.420 [2024-12-06 13:16:23.248118] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:36.420 [2024-12-06 13:16:23.248169] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:21:36.420 [2024-12-06 13:16:23.248495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:36.420 [2024-12-06 13:16:23.254238] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:36.420 [2024-12-06 13:16:23.254262] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:36.420 [2024-12-06 13:16:23.254594] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:36.420 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.420 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:36.420 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:36.420 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:36.420 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:36.420 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:36.420 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:36.420 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:36.420 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:36.420 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:36.420 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:36.420 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.420 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:21:36.420 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.420 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.420 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.420 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:36.420 "name": "raid_bdev1", 00:21:36.420 "uuid": "3c211c7d-12c6-4044-b1cf-9304928991c6", 00:21:36.420 "strip_size_kb": 64, 00:21:36.420 "state": "online", 00:21:36.420 "raid_level": "raid5f", 00:21:36.420 "superblock": true, 00:21:36.420 "num_base_bdevs": 3, 00:21:36.420 "num_base_bdevs_discovered": 3, 00:21:36.420 "num_base_bdevs_operational": 3, 00:21:36.420 "base_bdevs_list": [ 00:21:36.420 { 00:21:36.420 "name": "pt1", 00:21:36.420 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:36.420 "is_configured": true, 00:21:36.420 "data_offset": 2048, 00:21:36.420 "data_size": 63488 00:21:36.420 }, 00:21:36.420 { 00:21:36.420 "name": "pt2", 00:21:36.420 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:36.420 "is_configured": true, 00:21:36.420 "data_offset": 2048, 00:21:36.420 "data_size": 63488 00:21:36.420 }, 00:21:36.420 { 00:21:36.420 "name": "pt3", 00:21:36.420 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:36.420 "is_configured": true, 00:21:36.420 "data_offset": 2048, 00:21:36.420 "data_size": 63488 00:21:36.420 } 00:21:36.420 ] 00:21:36.420 }' 00:21:36.420 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:36.420 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.064 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:37.064 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:37.064 13:16:23 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:37.064 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:37.064 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:37.064 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:37.064 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:37.064 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.064 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.064 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:37.064 [2024-12-06 13:16:23.801762] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:37.064 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.064 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:37.064 "name": "raid_bdev1", 00:21:37.064 "aliases": [ 00:21:37.064 "3c211c7d-12c6-4044-b1cf-9304928991c6" 00:21:37.064 ], 00:21:37.064 "product_name": "Raid Volume", 00:21:37.064 "block_size": 512, 00:21:37.064 "num_blocks": 126976, 00:21:37.064 "uuid": "3c211c7d-12c6-4044-b1cf-9304928991c6", 00:21:37.064 "assigned_rate_limits": { 00:21:37.064 "rw_ios_per_sec": 0, 00:21:37.064 "rw_mbytes_per_sec": 0, 00:21:37.064 "r_mbytes_per_sec": 0, 00:21:37.064 "w_mbytes_per_sec": 0 00:21:37.064 }, 00:21:37.064 "claimed": false, 00:21:37.064 "zoned": false, 00:21:37.064 "supported_io_types": { 00:21:37.064 "read": true, 00:21:37.064 "write": true, 00:21:37.064 "unmap": false, 00:21:37.064 "flush": false, 00:21:37.064 "reset": true, 00:21:37.064 "nvme_admin": false, 00:21:37.064 "nvme_io": false, 00:21:37.064 "nvme_io_md": false, 
00:21:37.064 "write_zeroes": true, 00:21:37.064 "zcopy": false, 00:21:37.064 "get_zone_info": false, 00:21:37.064 "zone_management": false, 00:21:37.064 "zone_append": false, 00:21:37.064 "compare": false, 00:21:37.064 "compare_and_write": false, 00:21:37.064 "abort": false, 00:21:37.064 "seek_hole": false, 00:21:37.064 "seek_data": false, 00:21:37.064 "copy": false, 00:21:37.064 "nvme_iov_md": false 00:21:37.064 }, 00:21:37.064 "driver_specific": { 00:21:37.064 "raid": { 00:21:37.064 "uuid": "3c211c7d-12c6-4044-b1cf-9304928991c6", 00:21:37.064 "strip_size_kb": 64, 00:21:37.064 "state": "online", 00:21:37.064 "raid_level": "raid5f", 00:21:37.064 "superblock": true, 00:21:37.064 "num_base_bdevs": 3, 00:21:37.064 "num_base_bdevs_discovered": 3, 00:21:37.064 "num_base_bdevs_operational": 3, 00:21:37.064 "base_bdevs_list": [ 00:21:37.064 { 00:21:37.064 "name": "pt1", 00:21:37.064 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:37.064 "is_configured": true, 00:21:37.064 "data_offset": 2048, 00:21:37.064 "data_size": 63488 00:21:37.064 }, 00:21:37.064 { 00:21:37.064 "name": "pt2", 00:21:37.064 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:37.064 "is_configured": true, 00:21:37.064 "data_offset": 2048, 00:21:37.064 "data_size": 63488 00:21:37.064 }, 00:21:37.064 { 00:21:37.064 "name": "pt3", 00:21:37.064 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:37.064 "is_configured": true, 00:21:37.064 "data_offset": 2048, 00:21:37.064 "data_size": 63488 00:21:37.064 } 00:21:37.064 ] 00:21:37.064 } 00:21:37.064 } 00:21:37.064 }' 00:21:37.064 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:37.064 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:37.064 pt2 00:21:37.064 pt3' 00:21:37.064 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:21:37.064 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:37.064 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:37.064 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:37.064 13:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:37.064 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.064 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.064 13:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.064 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:37.064 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:37.064 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:37.065 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:37.065 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.065 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.065 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:37.065 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.323 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:37.323 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:37.323 
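The `verify_raid_bdev_properties` section above builds a fingerprint per bdev with `jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'` and compares the raid volume's fingerprint against each base bdev's (`[[ 512 == \5\1\2\ \ \ ]]`, i.e. `"512"` followed by three spaces, since jq's `join` renders `null`/missing fields as empty strings). A small sketch of that comparison (the `fingerprint` name is illustrative, not from the script):

```python
def fingerprint(bdev):
    # Mirrors jq's `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")`:
    # null/missing entries become empty strings, so a plain 512-byte bdev with no
    # metadata or DIF configuration yields "512" plus three trailing spaces.
    keys = ["block_size", "md_size", "md_interleave", "dif_type"]
    return " ".join("" if bdev.get(k) is None else str(bdev[k]) for k in keys)

raid_volume = {"block_size": 512}   # from the Raid Volume JSON in the log
pt_base     = {"block_size": 512}   # each pt1/pt2/pt3 reports the same

print(repr(fingerprint(raid_volume)))                     # '512   '
print(fingerprint(raid_volume) == fingerprint(pt_base))   # True
```

Comparing the joined string rather than individual fields lets one `[[ ... == ... ]]` catch any mismatch in block size, metadata size, interleave mode, or DIF type at once.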
13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:37.323 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:21:37.323 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:37.323 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.323 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.323 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.323 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:37.323 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:37.323 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:37.323 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.323 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:37.323 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.323 [2024-12-06 13:16:24.141736] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3c211c7d-12c6-4044-b1cf-9304928991c6 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3c211c7d-12c6-4044-b1cf-9304928991c6 ']' 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:37.324 13:16:24 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.324 [2024-12-06 13:16:24.189516] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:37.324 [2024-12-06 13:16:24.189552] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:37.324 [2024-12-06 13:16:24.189815] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:37.324 [2024-12-06 13:16:24.189930] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:37.324 [2024-12-06 13:16:24.189949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.324 [2024-12-06 13:16:24.329613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:37.324 [2024-12-06 13:16:24.332372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:37.324 [2024-12-06 13:16:24.332454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:37.324 [2024-12-06 13:16:24.332560] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:37.324 [2024-12-06 13:16:24.332637] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:37.324 [2024-12-06 13:16:24.332671] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:21:37.324 [2024-12-06 13:16:24.332700] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:37.324 [2024-12-06 13:16:24.332715] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:37.324 request: 00:21:37.324 { 00:21:37.324 "name": "raid_bdev1", 00:21:37.324 "raid_level": "raid5f", 00:21:37.324 "base_bdevs": [ 00:21:37.324 "malloc1", 00:21:37.324 "malloc2", 00:21:37.324 "malloc3" 00:21:37.324 ], 00:21:37.324 "strip_size_kb": 64, 00:21:37.324 "superblock": false, 00:21:37.324 "method": "bdev_raid_create", 00:21:37.324 "req_id": 1 00:21:37.324 } 00:21:37.324 Got JSON-RPC error response 00:21:37.324 response: 00:21:37.324 { 00:21:37.324 "code": -17, 00:21:37.324 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:37.324 } 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:37.324 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:37.583 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.583 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.583 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:37.583 
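The `NOT rpc_cmd bdev_raid_create ...` step above is a negative test: recreating the array from the raw malloc bdevs must fail because they still carry the superblock of the deleted `raid_bdev1`, and the script asserts a nonzero exit (`es=1`). The JSON-RPC error body shows `"code": -17`, which appears to be a negated POSIX errno (EEXIST, matching the "File exists" message). A small sketch decoding that error body as captured in the log:

```python
import errno
import json

# Error body from the failed bdev_raid_create call in the log above.
response = json.loads("""
{
  "code": -17,
  "message": "Failed to create RAID bdev raid_bdev1: File exists"
}
""")

# Assumption: the negative JSON-RPC code maps to a POSIX errno value.
# -(-17) == 17 == errno.EEXIST, consistent with the "File exists" text.
assert -response["code"] == errno.EEXIST
print(errno.errorcode[-response["code"]])
```

This is why the test tears the array down with `bdev_raid_delete` and `bdev_passthru_delete` first: the superblock persists on the base bdevs, so only re-wrapping them in fresh passthru bdevs (as the `pt1` re-registration that follows does) lets the raid bdev be reassembled.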
13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.583 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.583 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:37.583 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:37.583 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:37.583 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.583 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.583 [2024-12-06 13:16:24.393565] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:37.583 [2024-12-06 13:16:24.393633] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:37.583 [2024-12-06 13:16:24.393666] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:37.583 [2024-12-06 13:16:24.393682] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:37.583 [2024-12-06 13:16:24.396844] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:37.583 [2024-12-06 13:16:24.396920] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:37.583 [2024-12-06 13:16:24.397055] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:37.583 [2024-12-06 13:16:24.397123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:37.583 pt1 00:21:37.583 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.583 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:21:37.583 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:37.583 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:37.583 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:37.583 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:37.583 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:37.583 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:37.583 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:37.583 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:37.583 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:37.583 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.583 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.583 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:37.583 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.583 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.583 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:37.583 "name": "raid_bdev1", 00:21:37.583 "uuid": "3c211c7d-12c6-4044-b1cf-9304928991c6", 00:21:37.583 "strip_size_kb": 64, 00:21:37.583 "state": "configuring", 00:21:37.583 "raid_level": "raid5f", 00:21:37.583 "superblock": true, 00:21:37.583 "num_base_bdevs": 3, 00:21:37.583 "num_base_bdevs_discovered": 1, 00:21:37.583 
"num_base_bdevs_operational": 3, 00:21:37.583 "base_bdevs_list": [ 00:21:37.583 { 00:21:37.583 "name": "pt1", 00:21:37.583 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:37.583 "is_configured": true, 00:21:37.583 "data_offset": 2048, 00:21:37.583 "data_size": 63488 00:21:37.583 }, 00:21:37.583 { 00:21:37.583 "name": null, 00:21:37.583 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:37.583 "is_configured": false, 00:21:37.583 "data_offset": 2048, 00:21:37.583 "data_size": 63488 00:21:37.583 }, 00:21:37.583 { 00:21:37.583 "name": null, 00:21:37.583 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:37.583 "is_configured": false, 00:21:37.583 "data_offset": 2048, 00:21:37.583 "data_size": 63488 00:21:37.583 } 00:21:37.583 ] 00:21:37.583 }' 00:21:37.583 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:37.583 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.150 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:21:38.150 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:38.150 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.150 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.150 [2024-12-06 13:16:24.981843] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:38.150 [2024-12-06 13:16:24.981981] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:38.150 [2024-12-06 13:16:24.982020] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:21:38.150 [2024-12-06 13:16:24.982036] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:38.150 [2024-12-06 13:16:24.982707] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:38.150 [2024-12-06 13:16:24.982759] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:38.150 [2024-12-06 13:16:24.982904] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:38.150 [2024-12-06 13:16:24.982948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:38.150 pt2 00:21:38.150 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.150 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:21:38.150 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.150 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.150 [2024-12-06 13:16:24.993832] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:38.150 13:16:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.150 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:21:38.150 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:38.150 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:38.150 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:38.150 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:38.150 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:38.150 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:38.150 13:16:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:21:38.150 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:38.150 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:38.150 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.150 13:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.150 13:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.150 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.150 13:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.150 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:38.150 "name": "raid_bdev1", 00:21:38.151 "uuid": "3c211c7d-12c6-4044-b1cf-9304928991c6", 00:21:38.151 "strip_size_kb": 64, 00:21:38.151 "state": "configuring", 00:21:38.151 "raid_level": "raid5f", 00:21:38.151 "superblock": true, 00:21:38.151 "num_base_bdevs": 3, 00:21:38.151 "num_base_bdevs_discovered": 1, 00:21:38.151 "num_base_bdevs_operational": 3, 00:21:38.151 "base_bdevs_list": [ 00:21:38.151 { 00:21:38.151 "name": "pt1", 00:21:38.151 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:38.151 "is_configured": true, 00:21:38.151 "data_offset": 2048, 00:21:38.151 "data_size": 63488 00:21:38.151 }, 00:21:38.151 { 00:21:38.151 "name": null, 00:21:38.151 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:38.151 "is_configured": false, 00:21:38.151 "data_offset": 0, 00:21:38.151 "data_size": 63488 00:21:38.151 }, 00:21:38.151 { 00:21:38.151 "name": null, 00:21:38.151 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:38.151 "is_configured": false, 00:21:38.151 "data_offset": 2048, 00:21:38.151 "data_size": 63488 00:21:38.151 } 00:21:38.151 ] 00:21:38.151 }' 00:21:38.151 13:16:25 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:38.151 13:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.717 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:38.717 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:38.717 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:38.717 13:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.717 13:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.717 [2024-12-06 13:16:25.498006] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:38.717 [2024-12-06 13:16:25.498149] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:38.717 [2024-12-06 13:16:25.498196] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:38.717 [2024-12-06 13:16:25.498216] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:38.717 [2024-12-06 13:16:25.498895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:38.717 [2024-12-06 13:16:25.498949] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:38.717 [2024-12-06 13:16:25.499064] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:38.717 [2024-12-06 13:16:25.499117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:38.717 pt2 00:21:38.717 13:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.717 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:38.717 13:16:25 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:38.717 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:38.717 13:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.717 13:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.717 [2024-12-06 13:16:25.505938] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:38.717 [2024-12-06 13:16:25.506051] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:38.717 [2024-12-06 13:16:25.506071] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:38.717 [2024-12-06 13:16:25.506087] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:38.717 [2024-12-06 13:16:25.506576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:38.717 [2024-12-06 13:16:25.506625] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:38.717 [2024-12-06 13:16:25.506699] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:38.717 [2024-12-06 13:16:25.506732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:38.717 [2024-12-06 13:16:25.506902] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:38.717 [2024-12-06 13:16:25.506932] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:38.717 [2024-12-06 13:16:25.507294] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:38.717 [2024-12-06 13:16:25.512600] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:38.717 [2024-12-06 13:16:25.512627] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:38.717 [2024-12-06 13:16:25.512846] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:38.717 pt3 00:21:38.717 13:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.717 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:38.717 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:38.717 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:38.717 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:38.717 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:38.717 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:38.717 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:38.717 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:38.717 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:38.717 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:38.717 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:38.717 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:38.717 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.717 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.717 13:16:25 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.718 13:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.718 13:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.718 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:38.718 "name": "raid_bdev1", 00:21:38.718 "uuid": "3c211c7d-12c6-4044-b1cf-9304928991c6", 00:21:38.718 "strip_size_kb": 64, 00:21:38.718 "state": "online", 00:21:38.718 "raid_level": "raid5f", 00:21:38.718 "superblock": true, 00:21:38.718 "num_base_bdevs": 3, 00:21:38.718 "num_base_bdevs_discovered": 3, 00:21:38.718 "num_base_bdevs_operational": 3, 00:21:38.718 "base_bdevs_list": [ 00:21:38.718 { 00:21:38.718 "name": "pt1", 00:21:38.718 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:38.718 "is_configured": true, 00:21:38.718 "data_offset": 2048, 00:21:38.718 "data_size": 63488 00:21:38.718 }, 00:21:38.718 { 00:21:38.718 "name": "pt2", 00:21:38.718 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:38.718 "is_configured": true, 00:21:38.718 "data_offset": 2048, 00:21:38.718 "data_size": 63488 00:21:38.718 }, 00:21:38.718 { 00:21:38.718 "name": "pt3", 00:21:38.718 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:38.718 "is_configured": true, 00:21:38.718 "data_offset": 2048, 00:21:38.718 "data_size": 63488 00:21:38.718 } 00:21:38.718 ] 00:21:38.718 }' 00:21:38.718 13:16:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:38.718 13:16:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.295 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:39.295 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:39.295 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:21:39.295 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:39.295 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:39.295 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:39.295 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:39.295 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.295 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.295 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:39.295 [2024-12-06 13:16:26.039743] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:39.295 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.295 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:39.295 "name": "raid_bdev1", 00:21:39.295 "aliases": [ 00:21:39.295 "3c211c7d-12c6-4044-b1cf-9304928991c6" 00:21:39.295 ], 00:21:39.295 "product_name": "Raid Volume", 00:21:39.295 "block_size": 512, 00:21:39.295 "num_blocks": 126976, 00:21:39.295 "uuid": "3c211c7d-12c6-4044-b1cf-9304928991c6", 00:21:39.295 "assigned_rate_limits": { 00:21:39.295 "rw_ios_per_sec": 0, 00:21:39.295 "rw_mbytes_per_sec": 0, 00:21:39.295 "r_mbytes_per_sec": 0, 00:21:39.295 "w_mbytes_per_sec": 0 00:21:39.295 }, 00:21:39.295 "claimed": false, 00:21:39.295 "zoned": false, 00:21:39.295 "supported_io_types": { 00:21:39.295 "read": true, 00:21:39.295 "write": true, 00:21:39.295 "unmap": false, 00:21:39.295 "flush": false, 00:21:39.295 "reset": true, 00:21:39.295 "nvme_admin": false, 00:21:39.295 "nvme_io": false, 00:21:39.295 "nvme_io_md": false, 00:21:39.295 "write_zeroes": true, 00:21:39.295 "zcopy": false, 00:21:39.295 
"get_zone_info": false, 00:21:39.295 "zone_management": false, 00:21:39.295 "zone_append": false, 00:21:39.295 "compare": false, 00:21:39.295 "compare_and_write": false, 00:21:39.295 "abort": false, 00:21:39.295 "seek_hole": false, 00:21:39.295 "seek_data": false, 00:21:39.295 "copy": false, 00:21:39.295 "nvme_iov_md": false 00:21:39.295 }, 00:21:39.295 "driver_specific": { 00:21:39.295 "raid": { 00:21:39.295 "uuid": "3c211c7d-12c6-4044-b1cf-9304928991c6", 00:21:39.295 "strip_size_kb": 64, 00:21:39.295 "state": "online", 00:21:39.295 "raid_level": "raid5f", 00:21:39.295 "superblock": true, 00:21:39.295 "num_base_bdevs": 3, 00:21:39.295 "num_base_bdevs_discovered": 3, 00:21:39.296 "num_base_bdevs_operational": 3, 00:21:39.296 "base_bdevs_list": [ 00:21:39.296 { 00:21:39.296 "name": "pt1", 00:21:39.296 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:39.296 "is_configured": true, 00:21:39.296 "data_offset": 2048, 00:21:39.296 "data_size": 63488 00:21:39.296 }, 00:21:39.296 { 00:21:39.296 "name": "pt2", 00:21:39.296 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:39.296 "is_configured": true, 00:21:39.296 "data_offset": 2048, 00:21:39.296 "data_size": 63488 00:21:39.296 }, 00:21:39.296 { 00:21:39.296 "name": "pt3", 00:21:39.296 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:39.296 "is_configured": true, 00:21:39.296 "data_offset": 2048, 00:21:39.296 "data_size": 63488 00:21:39.296 } 00:21:39.296 ] 00:21:39.296 } 00:21:39.296 } 00:21:39.296 }' 00:21:39.296 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:39.296 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:39.296 pt2 00:21:39.296 pt3' 00:21:39.296 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:39.296 13:16:26 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:39.296 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:39.296 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:39.296 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:39.296 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.296 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.296 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.296 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:39.296 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:39.296 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:39.296 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:39.296 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.296 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:39.296 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.296 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.296 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:39.296 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:39.296 13:16:26 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:39.296 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:39.296 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:21:39.296 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.296 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.554 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.554 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:39.554 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:39.554 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:39.554 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:39.554 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.554 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.554 [2024-12-06 13:16:26.359556] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:39.554 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.554 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3c211c7d-12c6-4044-b1cf-9304928991c6 '!=' 3c211c7d-12c6-4044-b1cf-9304928991c6 ']' 00:21:39.554 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:21:39.554 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:39.554 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:21:39.554 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:21:39.554 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.554 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.554 [2024-12-06 13:16:26.407377] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:39.554 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.554 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:39.554 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:39.554 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:39.554 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:39.554 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:39.554 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:39.554 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:39.554 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:39.554 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:39.554 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:39.554 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.554 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:39.554 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:39.554 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.554 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.554 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:39.554 "name": "raid_bdev1", 00:21:39.554 "uuid": "3c211c7d-12c6-4044-b1cf-9304928991c6", 00:21:39.554 "strip_size_kb": 64, 00:21:39.554 "state": "online", 00:21:39.554 "raid_level": "raid5f", 00:21:39.554 "superblock": true, 00:21:39.554 "num_base_bdevs": 3, 00:21:39.554 "num_base_bdevs_discovered": 2, 00:21:39.554 "num_base_bdevs_operational": 2, 00:21:39.554 "base_bdevs_list": [ 00:21:39.554 { 00:21:39.554 "name": null, 00:21:39.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.554 "is_configured": false, 00:21:39.554 "data_offset": 0, 00:21:39.554 "data_size": 63488 00:21:39.554 }, 00:21:39.554 { 00:21:39.554 "name": "pt2", 00:21:39.554 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:39.554 "is_configured": true, 00:21:39.554 "data_offset": 2048, 00:21:39.554 "data_size": 63488 00:21:39.554 }, 00:21:39.554 { 00:21:39.554 "name": "pt3", 00:21:39.554 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:39.554 "is_configured": true, 00:21:39.554 "data_offset": 2048, 00:21:39.554 "data_size": 63488 00:21:39.554 } 00:21:39.554 ] 00:21:39.554 }' 00:21:39.554 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:39.554 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.120 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:40.120 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.120 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.120 [2024-12-06 13:16:26.967487] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:40.120 [2024-12-06 13:16:26.967554] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:40.120 [2024-12-06 13:16:26.967678] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:40.120 [2024-12-06 13:16:26.967770] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:40.120 [2024-12-06 13:16:26.967794] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:40.120 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.120 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.120 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.120 13:16:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:21:40.120 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.120 13:16:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.120 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:21:40.120 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:21:40.120 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:21:40.120 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:40.120 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:21:40.120 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.120 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:21:40.120 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.120 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:40.120 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:40.120 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:21:40.120 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.120 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.120 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.120 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:40.120 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:40.120 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:21:40.120 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:40.120 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:40.120 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.120 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.120 [2024-12-06 13:16:27.047370] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:40.120 [2024-12-06 13:16:27.047517] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:40.120 [2024-12-06 13:16:27.047551] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:21:40.120 [2024-12-06 13:16:27.047586] vbdev_passthru.c: 697:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:21:40.121 [2024-12-06 13:16:27.050810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:40.121 [2024-12-06 13:16:27.051145] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:40.121 [2024-12-06 13:16:27.051297] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:40.121 [2024-12-06 13:16:27.051382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:40.121 pt2 00:21:40.121 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.121 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:21:40.121 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:40.121 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:40.121 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:40.121 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:40.121 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:40.121 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:40.121 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:40.121 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:40.121 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:40.121 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.121 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:40.121 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.121 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:40.121 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.121 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:40.121 "name": "raid_bdev1", 00:21:40.121 "uuid": "3c211c7d-12c6-4044-b1cf-9304928991c6", 00:21:40.121 "strip_size_kb": 64, 00:21:40.121 "state": "configuring", 00:21:40.121 "raid_level": "raid5f", 00:21:40.121 "superblock": true, 00:21:40.121 "num_base_bdevs": 3, 00:21:40.121 "num_base_bdevs_discovered": 1, 00:21:40.121 "num_base_bdevs_operational": 2, 00:21:40.121 "base_bdevs_list": [ 00:21:40.121 { 00:21:40.121 "name": null, 00:21:40.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:40.121 "is_configured": false, 00:21:40.121 "data_offset": 2048, 00:21:40.121 "data_size": 63488 00:21:40.121 }, 00:21:40.121 { 00:21:40.121 "name": "pt2", 00:21:40.121 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:40.121 "is_configured": true, 00:21:40.121 "data_offset": 2048, 00:21:40.121 "data_size": 63488 00:21:40.121 }, 00:21:40.121 { 00:21:40.121 "name": null, 00:21:40.121 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:40.121 "is_configured": false, 00:21:40.121 "data_offset": 2048, 00:21:40.121 "data_size": 63488 00:21:40.121 } 00:21:40.121 ] 00:21:40.121 }' 00:21:40.121 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:40.121 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.687 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:21:40.687 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:40.687 13:16:27 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:21:40.687 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:40.687 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.687 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.687 [2024-12-06 13:16:27.595847] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:40.687 [2024-12-06 13:16:27.595990] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:40.687 [2024-12-06 13:16:27.596029] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:40.687 [2024-12-06 13:16:27.596049] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:40.687 [2024-12-06 13:16:27.596753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:40.687 [2024-12-06 13:16:27.596798] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:40.687 [2024-12-06 13:16:27.596915] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:40.687 [2024-12-06 13:16:27.596975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:40.687 [2024-12-06 13:16:27.597159] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:40.687 [2024-12-06 13:16:27.597189] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:40.687 [2024-12-06 13:16:27.597595] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:40.687 pt3 00:21:40.687 [2024-12-06 13:16:27.602979] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:40.687 [2024-12-06 13:16:27.603013] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:40.687 [2024-12-06 13:16:27.603381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:40.687 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.687 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:40.687 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:40.687 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:40.687 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:40.688 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:40.688 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:40.688 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:40.688 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:40.688 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:40.688 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:40.688 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.688 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:40.688 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.688 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.688 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.688 13:16:27 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:40.688 "name": "raid_bdev1", 00:21:40.688 "uuid": "3c211c7d-12c6-4044-b1cf-9304928991c6", 00:21:40.688 "strip_size_kb": 64, 00:21:40.688 "state": "online", 00:21:40.688 "raid_level": "raid5f", 00:21:40.688 "superblock": true, 00:21:40.688 "num_base_bdevs": 3, 00:21:40.688 "num_base_bdevs_discovered": 2, 00:21:40.688 "num_base_bdevs_operational": 2, 00:21:40.688 "base_bdevs_list": [ 00:21:40.688 { 00:21:40.688 "name": null, 00:21:40.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:40.688 "is_configured": false, 00:21:40.688 "data_offset": 2048, 00:21:40.688 "data_size": 63488 00:21:40.688 }, 00:21:40.688 { 00:21:40.688 "name": "pt2", 00:21:40.688 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:40.688 "is_configured": true, 00:21:40.688 "data_offset": 2048, 00:21:40.688 "data_size": 63488 00:21:40.688 }, 00:21:40.688 { 00:21:40.688 "name": "pt3", 00:21:40.688 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:40.688 "is_configured": true, 00:21:40.688 "data_offset": 2048, 00:21:40.688 "data_size": 63488 00:21:40.688 } 00:21:40.688 ] 00:21:40.688 }' 00:21:40.688 13:16:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:40.688 13:16:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.255 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:41.255 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.255 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.255 [2024-12-06 13:16:28.073803] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:41.255 [2024-12-06 13:16:28.074166] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:41.255 [2024-12-06 13:16:28.074312] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:41.255 [2024-12-06 13:16:28.074417] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:41.255 [2024-12-06 13:16:28.074435] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:41.255 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.255 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.255 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:41.255 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.255 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.255 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.255 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:41.255 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:21:41.255 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:21:41.255 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:21:41.255 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:21:41.255 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.255 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.256 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.256 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:21:41.256 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.256 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.256 [2024-12-06 13:16:28.145823] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:41.256 [2024-12-06 13:16:28.145929] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:41.256 [2024-12-06 13:16:28.145962] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:41.256 [2024-12-06 13:16:28.145978] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:41.256 [2024-12-06 13:16:28.149168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:41.256 [2024-12-06 13:16:28.149216] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:41.256 [2024-12-06 13:16:28.149334] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:41.256 [2024-12-06 13:16:28.149399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:41.256 [2024-12-06 13:16:28.149608] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:41.256 [2024-12-06 13:16:28.149628] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:41.256 [2024-12-06 13:16:28.149654] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:21:41.256 [2024-12-06 13:16:28.149723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:41.256 pt1 00:21:41.256 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.256 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:21:41.256 13:16:28 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:21:41.256 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:41.256 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:41.256 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:41.256 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:41.256 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:41.256 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:41.256 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:41.256 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:41.256 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:41.256 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:41.256 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.256 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.256 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.256 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.256 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:41.256 "name": "raid_bdev1", 00:21:41.256 "uuid": "3c211c7d-12c6-4044-b1cf-9304928991c6", 00:21:41.256 "strip_size_kb": 64, 00:21:41.256 "state": "configuring", 00:21:41.256 "raid_level": "raid5f", 00:21:41.256 
"superblock": true, 00:21:41.256 "num_base_bdevs": 3, 00:21:41.256 "num_base_bdevs_discovered": 1, 00:21:41.256 "num_base_bdevs_operational": 2, 00:21:41.256 "base_bdevs_list": [ 00:21:41.256 { 00:21:41.256 "name": null, 00:21:41.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.256 "is_configured": false, 00:21:41.256 "data_offset": 2048, 00:21:41.256 "data_size": 63488 00:21:41.256 }, 00:21:41.256 { 00:21:41.256 "name": "pt2", 00:21:41.256 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:41.256 "is_configured": true, 00:21:41.256 "data_offset": 2048, 00:21:41.256 "data_size": 63488 00:21:41.256 }, 00:21:41.256 { 00:21:41.256 "name": null, 00:21:41.256 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:41.256 "is_configured": false, 00:21:41.256 "data_offset": 2048, 00:21:41.256 "data_size": 63488 00:21:41.256 } 00:21:41.256 ] 00:21:41.256 }' 00:21:41.256 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:41.256 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.824 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:21:41.824 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:41.824 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.824 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.824 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.824 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:21:41.824 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:41.824 13:16:28 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.824 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.824 [2024-12-06 13:16:28.714086] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:41.824 [2024-12-06 13:16:28.714182] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:41.824 [2024-12-06 13:16:28.714217] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:21:41.824 [2024-12-06 13:16:28.714234] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:41.824 [2024-12-06 13:16:28.714877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:41.824 [2024-12-06 13:16:28.714919] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:41.824 [2024-12-06 13:16:28.715029] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:41.824 [2024-12-06 13:16:28.715064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:41.824 [2024-12-06 13:16:28.715221] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:21:41.824 [2024-12-06 13:16:28.715247] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:41.824 [2024-12-06 13:16:28.715576] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:21:41.824 [2024-12-06 13:16:28.720423] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:41.824 [2024-12-06 13:16:28.720462] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:41.824 [2024-12-06 13:16:28.720759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:41.824 pt3 00:21:41.824 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:21:41.824 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:41.824 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:41.824 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:41.824 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:41.824 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:41.824 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:41.824 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:41.824 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:41.824 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:41.824 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:41.824 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.824 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:41.824 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.824 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.824 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.824 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:41.824 "name": "raid_bdev1", 00:21:41.824 "uuid": "3c211c7d-12c6-4044-b1cf-9304928991c6", 00:21:41.824 "strip_size_kb": 64, 00:21:41.824 "state": "online", 00:21:41.824 "raid_level": 
"raid5f", 00:21:41.824 "superblock": true, 00:21:41.824 "num_base_bdevs": 3, 00:21:41.824 "num_base_bdevs_discovered": 2, 00:21:41.824 "num_base_bdevs_operational": 2, 00:21:41.824 "base_bdevs_list": [ 00:21:41.824 { 00:21:41.824 "name": null, 00:21:41.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.824 "is_configured": false, 00:21:41.824 "data_offset": 2048, 00:21:41.824 "data_size": 63488 00:21:41.824 }, 00:21:41.824 { 00:21:41.824 "name": "pt2", 00:21:41.824 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:41.824 "is_configured": true, 00:21:41.824 "data_offset": 2048, 00:21:41.824 "data_size": 63488 00:21:41.824 }, 00:21:41.824 { 00:21:41.824 "name": "pt3", 00:21:41.824 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:41.824 "is_configured": true, 00:21:41.824 "data_offset": 2048, 00:21:41.825 "data_size": 63488 00:21:41.825 } 00:21:41.825 ] 00:21:41.825 }' 00:21:41.825 13:16:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:41.825 13:16:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.392 13:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:42.392 13:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:21:42.392 13:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.392 13:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.392 13:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.392 13:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:42.392 13:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:21:42.392 13:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:21:42.392 13:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.392 13:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.392 [2024-12-06 13:16:29.238701] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:42.392 13:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.392 13:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 3c211c7d-12c6-4044-b1cf-9304928991c6 '!=' 3c211c7d-12c6-4044-b1cf-9304928991c6 ']' 00:21:42.392 13:16:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81845 00:21:42.392 13:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81845 ']' 00:21:42.392 13:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81845 00:21:42.392 13:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:21:42.392 13:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:42.392 13:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81845 00:21:42.392 13:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:42.392 killing process with pid 81845 00:21:42.392 13:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:42.392 13:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81845' 00:21:42.392 13:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81845 00:21:42.392 [2024-12-06 13:16:29.313995] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:42.392 13:16:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81845 
00:21:42.392 [2024-12-06 13:16:29.314119] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:42.392 [2024-12-06 13:16:29.314205] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:42.392 [2024-12-06 13:16:29.314225] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:42.651 [2024-12-06 13:16:29.590883] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:44.027 13:16:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:21:44.027 00:21:44.027 real 0m8.877s 00:21:44.027 user 0m14.379s 00:21:44.027 sys 0m1.358s 00:21:44.027 13:16:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:44.027 ************************************ 00:21:44.027 END TEST raid5f_superblock_test 00:21:44.027 ************************************ 00:21:44.027 13:16:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.027 13:16:30 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:21:44.027 13:16:30 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:21:44.027 13:16:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:21:44.027 13:16:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:44.027 13:16:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:44.027 ************************************ 00:21:44.028 START TEST raid5f_rebuild_test 00:21:44.028 ************************************ 00:21:44.028 13:16:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:21:44.028 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:21:44.028 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:21:44.028 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:21:44.028 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:44.028 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:44.028 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:44.028 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:44.028 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:44.028 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:44.028 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:44.028 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:44.028 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:44.028 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:44.028 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:21:44.028 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:44.028 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:44.028 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:44.028 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:44.028 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:44.028 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:44.028 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:44.028 13:16:30 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:44.028 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:44.028 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:21:44.028 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:21:44.028 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:21:44.028 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:21:44.028 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:21:44.028 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=82299 00:21:44.028 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:44.028 13:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 82299 00:21:44.028 13:16:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 82299 ']' 00:21:44.028 13:16:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.028 13:16:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:44.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.028 13:16:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.028 13:16:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:44.028 13:16:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.028 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:21:44.028 Zero copy mechanism will not be used. 00:21:44.028 [2024-12-06 13:16:30.836305] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:21:44.028 [2024-12-06 13:16:30.836485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82299 ] 00:21:44.028 [2024-12-06 13:16:31.013072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.286 [2024-12-06 13:16:31.160527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.545 [2024-12-06 13:16:31.380846] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:44.545 [2024-12-06 13:16:31.380922] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:44.804 13:16:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:44.804 13:16:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:21:44.804 13:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:44.804 13:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:44.804 13:16:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.064 13:16:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.064 BaseBdev1_malloc 00:21:45.064 13:16:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.064 13:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:45.064 13:16:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.064 13:16:31 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.064 [2024-12-06 13:16:31.873586] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:45.064 [2024-12-06 13:16:31.873673] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:45.064 [2024-12-06 13:16:31.873706] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:45.064 [2024-12-06 13:16:31.873725] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:45.064 [2024-12-06 13:16:31.876681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:45.064 [2024-12-06 13:16:31.876733] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:45.064 BaseBdev1 00:21:45.064 13:16:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.064 13:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:45.064 13:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:45.064 13:16:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.064 13:16:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.064 BaseBdev2_malloc 00:21:45.064 13:16:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.064 13:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:45.064 13:16:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.064 13:16:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.064 [2024-12-06 13:16:31.929082] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:21:45.064 [2024-12-06 13:16:31.929166] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:45.064 [2024-12-06 13:16:31.929197] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:45.064 [2024-12-06 13:16:31.929214] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:45.064 [2024-12-06 13:16:31.932098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:45.064 [2024-12-06 13:16:31.932152] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:45.064 BaseBdev2 00:21:45.064 13:16:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.064 13:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:45.064 13:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:45.064 13:16:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.064 13:16:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.064 BaseBdev3_malloc 00:21:45.064 13:16:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.064 13:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:45.064 13:16:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.064 13:16:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.064 [2024-12-06 13:16:31.995930] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:45.064 [2024-12-06 13:16:31.996171] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:45.064 [2024-12-06 13:16:31.996218] vbdev_passthru.c: 682:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:21:45.064 [2024-12-06 13:16:31.996240] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:45.064 [2024-12-06 13:16:31.999223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:45.064 [2024-12-06 13:16:31.999396] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:45.064 BaseBdev3 00:21:45.064 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.064 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:21:45.064 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.064 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.064 spare_malloc 00:21:45.064 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.064 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:45.064 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.064 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.064 spare_delay 00:21:45.064 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.064 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:45.064 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.064 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.064 [2024-12-06 13:16:32.073071] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:45.064 [2024-12-06 13:16:32.073186] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:45.064 [2024-12-06 13:16:32.073238] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:21:45.064 [2024-12-06 13:16:32.073263] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:45.064 [2024-12-06 13:16:32.077366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:45.064 [2024-12-06 13:16:32.077437] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:45.323 spare 00:21:45.323 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.323 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:21:45.323 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.323 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.323 [2024-12-06 13:16:32.081810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:45.323 [2024-12-06 13:16:32.084871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:45.323 [2024-12-06 13:16:32.084990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:45.324 [2024-12-06 13:16:32.085152] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:45.324 [2024-12-06 13:16:32.085174] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:21:45.324 [2024-12-06 13:16:32.085659] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:45.324 [2024-12-06 13:16:32.092828] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:45.324 [2024-12-06 13:16:32.092876] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:45.324 [2024-12-06 13:16:32.093293] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:45.324 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.324 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:45.324 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:45.324 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:45.324 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:45.324 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:45.324 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:45.324 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:45.324 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:45.324 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:45.324 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:45.324 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:45.324 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.324 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.324 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.324 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.324 13:16:32 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:45.324 "name": "raid_bdev1", 00:21:45.324 "uuid": "c0098b16-e32f-458e-8360-a6b8878db9d6", 00:21:45.324 "strip_size_kb": 64, 00:21:45.324 "state": "online", 00:21:45.324 "raid_level": "raid5f", 00:21:45.324 "superblock": false, 00:21:45.324 "num_base_bdevs": 3, 00:21:45.324 "num_base_bdevs_discovered": 3, 00:21:45.324 "num_base_bdevs_operational": 3, 00:21:45.324 "base_bdevs_list": [ 00:21:45.324 { 00:21:45.324 "name": "BaseBdev1", 00:21:45.324 "uuid": "41c7852a-7856-55bb-852b-8be9c463b604", 00:21:45.324 "is_configured": true, 00:21:45.324 "data_offset": 0, 00:21:45.324 "data_size": 65536 00:21:45.324 }, 00:21:45.324 { 00:21:45.324 "name": "BaseBdev2", 00:21:45.324 "uuid": "f8b195e7-48d8-58fc-ac6c-fe788fab1260", 00:21:45.324 "is_configured": true, 00:21:45.324 "data_offset": 0, 00:21:45.324 "data_size": 65536 00:21:45.324 }, 00:21:45.324 { 00:21:45.324 "name": "BaseBdev3", 00:21:45.324 "uuid": "6b997d11-f00d-5386-ad1b-d279472a9982", 00:21:45.324 "is_configured": true, 00:21:45.324 "data_offset": 0, 00:21:45.324 "data_size": 65536 00:21:45.324 } 00:21:45.324 ] 00:21:45.324 }' 00:21:45.324 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:45.324 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.890 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:45.890 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.890 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.890 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:45.890 [2024-12-06 13:16:32.624136] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:45.890 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:21:45.890 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:21:45.890 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:45.891 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.891 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.891 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:45.891 13:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.891 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:21:45.891 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:45.891 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:21:45.891 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:21:45.891 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:21:45.891 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:45.891 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:45.891 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:45.891 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:45.891 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:45.891 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:21:45.891 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:45.891 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:21:45.891 13:16:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:46.149 [2024-12-06 13:16:33.024091] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:21:46.149 /dev/nbd0 00:21:46.149 13:16:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:46.149 13:16:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:46.149 13:16:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:46.149 13:16:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:21:46.149 13:16:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:46.149 13:16:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:46.149 13:16:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:46.149 13:16:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:21:46.149 13:16:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:46.149 13:16:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:46.149 13:16:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:46.149 1+0 records in 00:21:46.149 1+0 records out 00:21:46.149 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437925 s, 9.4 MB/s 00:21:46.149 13:16:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:46.149 13:16:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:21:46.149 13:16:33 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:46.149 13:16:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:46.149 13:16:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:21:46.149 13:16:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:46.149 13:16:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:46.149 13:16:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:21:46.149 13:16:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:21:46.149 13:16:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:21:46.149 13:16:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:21:47.085 512+0 records in 00:21:47.085 512+0 records out 00:21:47.085 67108864 bytes (67 MB, 64 MiB) copied, 0.646528 s, 104 MB/s 00:21:47.085 13:16:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:47.085 13:16:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:47.085 13:16:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:47.085 13:16:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:47.085 13:16:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:21:47.085 13:16:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:47.085 13:16:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:47.085 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:47.085 
[2024-12-06 13:16:34.017343] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:47.085 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:47.085 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:47.085 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:47.085 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:47.085 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:47.085 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:47.085 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:47.085 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:47.085 13:16:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.085 13:16:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.085 [2024-12-06 13:16:34.027996] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:47.085 13:16:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.085 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:47.085 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:47.085 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:47.085 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:47.085 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:47.085 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:21:47.085 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:47.085 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:47.085 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:47.085 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:47.085 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:47.085 13:16:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.085 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:47.085 13:16:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.085 13:16:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.085 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:47.085 "name": "raid_bdev1", 00:21:47.085 "uuid": "c0098b16-e32f-458e-8360-a6b8878db9d6", 00:21:47.085 "strip_size_kb": 64, 00:21:47.085 "state": "online", 00:21:47.085 "raid_level": "raid5f", 00:21:47.085 "superblock": false, 00:21:47.085 "num_base_bdevs": 3, 00:21:47.085 "num_base_bdevs_discovered": 2, 00:21:47.085 "num_base_bdevs_operational": 2, 00:21:47.085 "base_bdevs_list": [ 00:21:47.085 { 00:21:47.086 "name": null, 00:21:47.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.086 "is_configured": false, 00:21:47.086 "data_offset": 0, 00:21:47.086 "data_size": 65536 00:21:47.086 }, 00:21:47.086 { 00:21:47.086 "name": "BaseBdev2", 00:21:47.086 "uuid": "f8b195e7-48d8-58fc-ac6c-fe788fab1260", 00:21:47.086 "is_configured": true, 00:21:47.086 "data_offset": 0, 00:21:47.086 "data_size": 65536 00:21:47.086 }, 00:21:47.086 { 00:21:47.086 "name": "BaseBdev3", 00:21:47.086 "uuid": 
"6b997d11-f00d-5386-ad1b-d279472a9982", 00:21:47.086 "is_configured": true, 00:21:47.086 "data_offset": 0, 00:21:47.086 "data_size": 65536 00:21:47.086 } 00:21:47.086 ] 00:21:47.086 }' 00:21:47.086 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:47.086 13:16:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.795 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:47.795 13:16:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.795 13:16:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.795 [2024-12-06 13:16:34.556157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:47.795 [2024-12-06 13:16:34.573404] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:21:47.795 13:16:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.795 13:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:47.795 [2024-12-06 13:16:34.581686] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:48.729 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:48.729 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:48.729 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:48.729 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:48.729 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:48.729 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:48.729 13:16:35 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.729 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.729 13:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.729 13:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.729 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:48.729 "name": "raid_bdev1", 00:21:48.729 "uuid": "c0098b16-e32f-458e-8360-a6b8878db9d6", 00:21:48.729 "strip_size_kb": 64, 00:21:48.729 "state": "online", 00:21:48.729 "raid_level": "raid5f", 00:21:48.729 "superblock": false, 00:21:48.729 "num_base_bdevs": 3, 00:21:48.729 "num_base_bdevs_discovered": 3, 00:21:48.729 "num_base_bdevs_operational": 3, 00:21:48.729 "process": { 00:21:48.729 "type": "rebuild", 00:21:48.729 "target": "spare", 00:21:48.729 "progress": { 00:21:48.729 "blocks": 18432, 00:21:48.729 "percent": 14 00:21:48.729 } 00:21:48.729 }, 00:21:48.729 "base_bdevs_list": [ 00:21:48.729 { 00:21:48.729 "name": "spare", 00:21:48.729 "uuid": "a7806144-68b9-5668-9e8e-ff6da448ef5b", 00:21:48.729 "is_configured": true, 00:21:48.729 "data_offset": 0, 00:21:48.729 "data_size": 65536 00:21:48.729 }, 00:21:48.729 { 00:21:48.729 "name": "BaseBdev2", 00:21:48.729 "uuid": "f8b195e7-48d8-58fc-ac6c-fe788fab1260", 00:21:48.729 "is_configured": true, 00:21:48.729 "data_offset": 0, 00:21:48.729 "data_size": 65536 00:21:48.729 }, 00:21:48.729 { 00:21:48.729 "name": "BaseBdev3", 00:21:48.729 "uuid": "6b997d11-f00d-5386-ad1b-d279472a9982", 00:21:48.729 "is_configured": true, 00:21:48.729 "data_offset": 0, 00:21:48.729 "data_size": 65536 00:21:48.729 } 00:21:48.729 ] 00:21:48.729 }' 00:21:48.729 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:48.729 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:48.729 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:48.988 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:48.988 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:48.988 13:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.988 13:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.988 [2024-12-06 13:16:35.760743] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:48.988 [2024-12-06 13:16:35.801162] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:48.988 [2024-12-06 13:16:35.801298] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:48.988 [2024-12-06 13:16:35.801331] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:48.988 [2024-12-06 13:16:35.801345] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:48.988 13:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.988 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:48.988 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:48.988 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:48.988 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:48.988 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:48.988 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:21:48.988 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:48.988 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:48.988 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:48.988 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:48.988 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:48.988 13:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.988 13:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.988 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.988 13:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.988 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:48.988 "name": "raid_bdev1", 00:21:48.988 "uuid": "c0098b16-e32f-458e-8360-a6b8878db9d6", 00:21:48.988 "strip_size_kb": 64, 00:21:48.988 "state": "online", 00:21:48.988 "raid_level": "raid5f", 00:21:48.988 "superblock": false, 00:21:48.988 "num_base_bdevs": 3, 00:21:48.988 "num_base_bdevs_discovered": 2, 00:21:48.988 "num_base_bdevs_operational": 2, 00:21:48.988 "base_bdevs_list": [ 00:21:48.988 { 00:21:48.988 "name": null, 00:21:48.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.988 "is_configured": false, 00:21:48.988 "data_offset": 0, 00:21:48.988 "data_size": 65536 00:21:48.988 }, 00:21:48.988 { 00:21:48.988 "name": "BaseBdev2", 00:21:48.988 "uuid": "f8b195e7-48d8-58fc-ac6c-fe788fab1260", 00:21:48.988 "is_configured": true, 00:21:48.988 "data_offset": 0, 00:21:48.988 "data_size": 65536 00:21:48.988 }, 00:21:48.988 { 00:21:48.988 "name": "BaseBdev3", 00:21:48.988 "uuid": 
"6b997d11-f00d-5386-ad1b-d279472a9982", 00:21:48.988 "is_configured": true, 00:21:48.988 "data_offset": 0, 00:21:48.988 "data_size": 65536 00:21:48.988 } 00:21:48.988 ] 00:21:48.988 }' 00:21:48.988 13:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:48.988 13:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.553 13:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:49.553 13:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:49.553 13:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:49.553 13:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:49.553 13:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:49.553 13:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:49.553 13:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.553 13:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.553 13:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:49.553 13:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.553 13:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:49.553 "name": "raid_bdev1", 00:21:49.553 "uuid": "c0098b16-e32f-458e-8360-a6b8878db9d6", 00:21:49.553 "strip_size_kb": 64, 00:21:49.553 "state": "online", 00:21:49.553 "raid_level": "raid5f", 00:21:49.553 "superblock": false, 00:21:49.553 "num_base_bdevs": 3, 00:21:49.553 "num_base_bdevs_discovered": 2, 00:21:49.553 "num_base_bdevs_operational": 2, 00:21:49.553 "base_bdevs_list": [ 00:21:49.553 { 00:21:49.553 
"name": null, 00:21:49.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.553 "is_configured": false, 00:21:49.553 "data_offset": 0, 00:21:49.553 "data_size": 65536 00:21:49.553 }, 00:21:49.553 { 00:21:49.553 "name": "BaseBdev2", 00:21:49.553 "uuid": "f8b195e7-48d8-58fc-ac6c-fe788fab1260", 00:21:49.553 "is_configured": true, 00:21:49.553 "data_offset": 0, 00:21:49.553 "data_size": 65536 00:21:49.553 }, 00:21:49.553 { 00:21:49.553 "name": "BaseBdev3", 00:21:49.553 "uuid": "6b997d11-f00d-5386-ad1b-d279472a9982", 00:21:49.554 "is_configured": true, 00:21:49.554 "data_offset": 0, 00:21:49.554 "data_size": 65536 00:21:49.554 } 00:21:49.554 ] 00:21:49.554 }' 00:21:49.554 13:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:49.554 13:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:49.554 13:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:49.554 13:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:49.554 13:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:49.554 13:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.554 13:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.554 [2024-12-06 13:16:36.536448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:49.554 [2024-12-06 13:16:36.552308] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:21:49.554 13:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.554 13:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:49.554 [2024-12-06 13:16:36.560089] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:21:50.928 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:50.928 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:50.928 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:50.928 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:50.928 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:50.928 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.928 13:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.928 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.928 13:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.928 13:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.928 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:50.928 "name": "raid_bdev1", 00:21:50.928 "uuid": "c0098b16-e32f-458e-8360-a6b8878db9d6", 00:21:50.928 "strip_size_kb": 64, 00:21:50.928 "state": "online", 00:21:50.928 "raid_level": "raid5f", 00:21:50.928 "superblock": false, 00:21:50.928 "num_base_bdevs": 3, 00:21:50.928 "num_base_bdevs_discovered": 3, 00:21:50.928 "num_base_bdevs_operational": 3, 00:21:50.928 "process": { 00:21:50.928 "type": "rebuild", 00:21:50.928 "target": "spare", 00:21:50.928 "progress": { 00:21:50.928 "blocks": 18432, 00:21:50.928 "percent": 14 00:21:50.928 } 00:21:50.928 }, 00:21:50.928 "base_bdevs_list": [ 00:21:50.928 { 00:21:50.928 "name": "spare", 00:21:50.928 "uuid": "a7806144-68b9-5668-9e8e-ff6da448ef5b", 00:21:50.928 "is_configured": true, 00:21:50.928 "data_offset": 0, 
00:21:50.928 "data_size": 65536 00:21:50.928 }, 00:21:50.928 { 00:21:50.928 "name": "BaseBdev2", 00:21:50.928 "uuid": "f8b195e7-48d8-58fc-ac6c-fe788fab1260", 00:21:50.928 "is_configured": true, 00:21:50.928 "data_offset": 0, 00:21:50.928 "data_size": 65536 00:21:50.928 }, 00:21:50.928 { 00:21:50.928 "name": "BaseBdev3", 00:21:50.928 "uuid": "6b997d11-f00d-5386-ad1b-d279472a9982", 00:21:50.928 "is_configured": true, 00:21:50.928 "data_offset": 0, 00:21:50.928 "data_size": 65536 00:21:50.928 } 00:21:50.928 ] 00:21:50.928 }' 00:21:50.928 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:50.928 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:50.928 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:50.928 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:50.928 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:21:50.928 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:21:50.928 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:21:50.928 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=608 00:21:50.928 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:50.928 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:50.928 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:50.928 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:50.928 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:50.928 13:16:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:50.928 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.928 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.928 13:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.928 13:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.928 13:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.928 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:50.928 "name": "raid_bdev1", 00:21:50.929 "uuid": "c0098b16-e32f-458e-8360-a6b8878db9d6", 00:21:50.929 "strip_size_kb": 64, 00:21:50.929 "state": "online", 00:21:50.929 "raid_level": "raid5f", 00:21:50.929 "superblock": false, 00:21:50.929 "num_base_bdevs": 3, 00:21:50.929 "num_base_bdevs_discovered": 3, 00:21:50.929 "num_base_bdevs_operational": 3, 00:21:50.929 "process": { 00:21:50.929 "type": "rebuild", 00:21:50.929 "target": "spare", 00:21:50.929 "progress": { 00:21:50.929 "blocks": 22528, 00:21:50.929 "percent": 17 00:21:50.929 } 00:21:50.929 }, 00:21:50.929 "base_bdevs_list": [ 00:21:50.929 { 00:21:50.929 "name": "spare", 00:21:50.929 "uuid": "a7806144-68b9-5668-9e8e-ff6da448ef5b", 00:21:50.929 "is_configured": true, 00:21:50.929 "data_offset": 0, 00:21:50.929 "data_size": 65536 00:21:50.929 }, 00:21:50.929 { 00:21:50.929 "name": "BaseBdev2", 00:21:50.929 "uuid": "f8b195e7-48d8-58fc-ac6c-fe788fab1260", 00:21:50.929 "is_configured": true, 00:21:50.929 "data_offset": 0, 00:21:50.929 "data_size": 65536 00:21:50.929 }, 00:21:50.929 { 00:21:50.929 "name": "BaseBdev3", 00:21:50.929 "uuid": "6b997d11-f00d-5386-ad1b-d279472a9982", 00:21:50.929 "is_configured": true, 00:21:50.929 "data_offset": 0, 00:21:50.929 "data_size": 65536 00:21:50.929 } 
00:21:50.929 ] 00:21:50.929 }' 00:21:50.929 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:50.929 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:50.929 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:50.929 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:50.929 13:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:52.362 13:16:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:52.362 13:16:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:52.362 13:16:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:52.362 13:16:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:52.362 13:16:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:52.362 13:16:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:52.362 13:16:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.362 13:16:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:52.362 13:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.362 13:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.362 13:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.362 13:16:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:52.362 "name": "raid_bdev1", 00:21:52.362 "uuid": "c0098b16-e32f-458e-8360-a6b8878db9d6", 00:21:52.362 
"strip_size_kb": 64, 00:21:52.362 "state": "online", 00:21:52.362 "raid_level": "raid5f", 00:21:52.362 "superblock": false, 00:21:52.362 "num_base_bdevs": 3, 00:21:52.363 "num_base_bdevs_discovered": 3, 00:21:52.363 "num_base_bdevs_operational": 3, 00:21:52.363 "process": { 00:21:52.363 "type": "rebuild", 00:21:52.363 "target": "spare", 00:21:52.363 "progress": { 00:21:52.363 "blocks": 47104, 00:21:52.363 "percent": 35 00:21:52.363 } 00:21:52.363 }, 00:21:52.363 "base_bdevs_list": [ 00:21:52.363 { 00:21:52.363 "name": "spare", 00:21:52.363 "uuid": "a7806144-68b9-5668-9e8e-ff6da448ef5b", 00:21:52.363 "is_configured": true, 00:21:52.363 "data_offset": 0, 00:21:52.363 "data_size": 65536 00:21:52.363 }, 00:21:52.363 { 00:21:52.363 "name": "BaseBdev2", 00:21:52.363 "uuid": "f8b195e7-48d8-58fc-ac6c-fe788fab1260", 00:21:52.363 "is_configured": true, 00:21:52.363 "data_offset": 0, 00:21:52.363 "data_size": 65536 00:21:52.363 }, 00:21:52.363 { 00:21:52.363 "name": "BaseBdev3", 00:21:52.363 "uuid": "6b997d11-f00d-5386-ad1b-d279472a9982", 00:21:52.363 "is_configured": true, 00:21:52.363 "data_offset": 0, 00:21:52.363 "data_size": 65536 00:21:52.363 } 00:21:52.363 ] 00:21:52.363 }' 00:21:52.363 13:16:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:52.363 13:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:52.363 13:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:52.363 13:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:52.363 13:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:53.339 13:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:53.339 13:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:53.339 13:16:40 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:53.339 13:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:53.339 13:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:53.339 13:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:53.339 13:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.339 13:16:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.339 13:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.339 13:16:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.339 13:16:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.339 13:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:53.339 "name": "raid_bdev1", 00:21:53.339 "uuid": "c0098b16-e32f-458e-8360-a6b8878db9d6", 00:21:53.339 "strip_size_kb": 64, 00:21:53.339 "state": "online", 00:21:53.339 "raid_level": "raid5f", 00:21:53.339 "superblock": false, 00:21:53.339 "num_base_bdevs": 3, 00:21:53.339 "num_base_bdevs_discovered": 3, 00:21:53.339 "num_base_bdevs_operational": 3, 00:21:53.339 "process": { 00:21:53.339 "type": "rebuild", 00:21:53.339 "target": "spare", 00:21:53.339 "progress": { 00:21:53.339 "blocks": 69632, 00:21:53.339 "percent": 53 00:21:53.339 } 00:21:53.339 }, 00:21:53.339 "base_bdevs_list": [ 00:21:53.339 { 00:21:53.339 "name": "spare", 00:21:53.339 "uuid": "a7806144-68b9-5668-9e8e-ff6da448ef5b", 00:21:53.339 "is_configured": true, 00:21:53.339 "data_offset": 0, 00:21:53.339 "data_size": 65536 00:21:53.339 }, 00:21:53.339 { 00:21:53.339 "name": "BaseBdev2", 00:21:53.339 "uuid": "f8b195e7-48d8-58fc-ac6c-fe788fab1260", 00:21:53.339 
"is_configured": true, 00:21:53.339 "data_offset": 0, 00:21:53.339 "data_size": 65536 00:21:53.339 }, 00:21:53.339 { 00:21:53.339 "name": "BaseBdev3", 00:21:53.339 "uuid": "6b997d11-f00d-5386-ad1b-d279472a9982", 00:21:53.339 "is_configured": true, 00:21:53.339 "data_offset": 0, 00:21:53.339 "data_size": 65536 00:21:53.339 } 00:21:53.339 ] 00:21:53.339 }' 00:21:53.339 13:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:53.339 13:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:53.339 13:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:53.339 13:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:53.339 13:16:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:54.271 13:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:54.271 13:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:54.271 13:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:54.271 13:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:54.271 13:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:54.271 13:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:54.271 13:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:54.271 13:16:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.271 13:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.271 13:16:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:21:54.271 13:16:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.271 13:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:54.271 "name": "raid_bdev1", 00:21:54.272 "uuid": "c0098b16-e32f-458e-8360-a6b8878db9d6", 00:21:54.272 "strip_size_kb": 64, 00:21:54.272 "state": "online", 00:21:54.272 "raid_level": "raid5f", 00:21:54.272 "superblock": false, 00:21:54.272 "num_base_bdevs": 3, 00:21:54.272 "num_base_bdevs_discovered": 3, 00:21:54.272 "num_base_bdevs_operational": 3, 00:21:54.272 "process": { 00:21:54.272 "type": "rebuild", 00:21:54.272 "target": "spare", 00:21:54.272 "progress": { 00:21:54.272 "blocks": 94208, 00:21:54.272 "percent": 71 00:21:54.272 } 00:21:54.272 }, 00:21:54.272 "base_bdevs_list": [ 00:21:54.272 { 00:21:54.272 "name": "spare", 00:21:54.272 "uuid": "a7806144-68b9-5668-9e8e-ff6da448ef5b", 00:21:54.272 "is_configured": true, 00:21:54.272 "data_offset": 0, 00:21:54.272 "data_size": 65536 00:21:54.272 }, 00:21:54.272 { 00:21:54.272 "name": "BaseBdev2", 00:21:54.272 "uuid": "f8b195e7-48d8-58fc-ac6c-fe788fab1260", 00:21:54.272 "is_configured": true, 00:21:54.272 "data_offset": 0, 00:21:54.272 "data_size": 65536 00:21:54.272 }, 00:21:54.272 { 00:21:54.272 "name": "BaseBdev3", 00:21:54.272 "uuid": "6b997d11-f00d-5386-ad1b-d279472a9982", 00:21:54.272 "is_configured": true, 00:21:54.272 "data_offset": 0, 00:21:54.272 "data_size": 65536 00:21:54.272 } 00:21:54.272 ] 00:21:54.272 }' 00:21:54.272 13:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:54.530 13:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:54.530 13:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:54.530 13:16:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:54.530 13:16:41 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:55.466 13:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:55.466 13:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:55.466 13:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:55.466 13:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:55.466 13:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:55.466 13:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:55.466 13:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.466 13:16:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.466 13:16:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.466 13:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.466 13:16:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.466 13:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:55.466 "name": "raid_bdev1", 00:21:55.466 "uuid": "c0098b16-e32f-458e-8360-a6b8878db9d6", 00:21:55.466 "strip_size_kb": 64, 00:21:55.466 "state": "online", 00:21:55.466 "raid_level": "raid5f", 00:21:55.466 "superblock": false, 00:21:55.466 "num_base_bdevs": 3, 00:21:55.466 "num_base_bdevs_discovered": 3, 00:21:55.466 "num_base_bdevs_operational": 3, 00:21:55.466 "process": { 00:21:55.466 "type": "rebuild", 00:21:55.466 "target": "spare", 00:21:55.466 "progress": { 00:21:55.466 "blocks": 116736, 00:21:55.466 "percent": 89 00:21:55.466 } 00:21:55.466 }, 00:21:55.466 "base_bdevs_list": [ 00:21:55.466 { 
00:21:55.466 "name": "spare", 00:21:55.466 "uuid": "a7806144-68b9-5668-9e8e-ff6da448ef5b", 00:21:55.466 "is_configured": true, 00:21:55.466 "data_offset": 0, 00:21:55.466 "data_size": 65536 00:21:55.466 }, 00:21:55.466 { 00:21:55.466 "name": "BaseBdev2", 00:21:55.466 "uuid": "f8b195e7-48d8-58fc-ac6c-fe788fab1260", 00:21:55.466 "is_configured": true, 00:21:55.466 "data_offset": 0, 00:21:55.466 "data_size": 65536 00:21:55.466 }, 00:21:55.466 { 00:21:55.466 "name": "BaseBdev3", 00:21:55.466 "uuid": "6b997d11-f00d-5386-ad1b-d279472a9982", 00:21:55.466 "is_configured": true, 00:21:55.466 "data_offset": 0, 00:21:55.466 "data_size": 65536 00:21:55.466 } 00:21:55.466 ] 00:21:55.466 }' 00:21:55.466 13:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:55.725 13:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:55.725 13:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:55.725 13:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:55.725 13:16:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:56.294 [2024-12-06 13:16:43.041754] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:56.294 [2024-12-06 13:16:43.041940] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:56.294 [2024-12-06 13:16:43.042002] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:56.553 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:56.553 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:56.553 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:56.553 13:16:43 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:56.553 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:56.553 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:56.553 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.553 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.553 13:16:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.553 13:16:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.813 13:16:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.813 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:56.813 "name": "raid_bdev1", 00:21:56.813 "uuid": "c0098b16-e32f-458e-8360-a6b8878db9d6", 00:21:56.813 "strip_size_kb": 64, 00:21:56.813 "state": "online", 00:21:56.813 "raid_level": "raid5f", 00:21:56.813 "superblock": false, 00:21:56.813 "num_base_bdevs": 3, 00:21:56.813 "num_base_bdevs_discovered": 3, 00:21:56.813 "num_base_bdevs_operational": 3, 00:21:56.813 "base_bdevs_list": [ 00:21:56.813 { 00:21:56.813 "name": "spare", 00:21:56.813 "uuid": "a7806144-68b9-5668-9e8e-ff6da448ef5b", 00:21:56.813 "is_configured": true, 00:21:56.813 "data_offset": 0, 00:21:56.813 "data_size": 65536 00:21:56.813 }, 00:21:56.813 { 00:21:56.813 "name": "BaseBdev2", 00:21:56.813 "uuid": "f8b195e7-48d8-58fc-ac6c-fe788fab1260", 00:21:56.813 "is_configured": true, 00:21:56.813 "data_offset": 0, 00:21:56.813 "data_size": 65536 00:21:56.813 }, 00:21:56.813 { 00:21:56.813 "name": "BaseBdev3", 00:21:56.813 "uuid": "6b997d11-f00d-5386-ad1b-d279472a9982", 00:21:56.813 "is_configured": true, 00:21:56.813 "data_offset": 0, 00:21:56.813 "data_size": 65536 00:21:56.813 } 
00:21:56.813 ] 00:21:56.813 }' 00:21:56.813 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:56.813 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:56.813 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:56.813 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:56.813 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:21:56.813 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:56.813 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:56.813 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:56.813 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:56.813 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:56.813 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.813 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.814 13:16:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.814 13:16:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.814 13:16:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.115 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:57.115 "name": "raid_bdev1", 00:21:57.115 "uuid": "c0098b16-e32f-458e-8360-a6b8878db9d6", 00:21:57.115 "strip_size_kb": 64, 00:21:57.115 "state": "online", 00:21:57.115 "raid_level": "raid5f", 00:21:57.115 "superblock": false, 
00:21:57.115 "num_base_bdevs": 3, 00:21:57.115 "num_base_bdevs_discovered": 3, 00:21:57.115 "num_base_bdevs_operational": 3, 00:21:57.115 "base_bdevs_list": [ 00:21:57.115 { 00:21:57.115 "name": "spare", 00:21:57.115 "uuid": "a7806144-68b9-5668-9e8e-ff6da448ef5b", 00:21:57.115 "is_configured": true, 00:21:57.115 "data_offset": 0, 00:21:57.115 "data_size": 65536 00:21:57.115 }, 00:21:57.115 { 00:21:57.115 "name": "BaseBdev2", 00:21:57.115 "uuid": "f8b195e7-48d8-58fc-ac6c-fe788fab1260", 00:21:57.115 "is_configured": true, 00:21:57.115 "data_offset": 0, 00:21:57.115 "data_size": 65536 00:21:57.115 }, 00:21:57.115 { 00:21:57.115 "name": "BaseBdev3", 00:21:57.115 "uuid": "6b997d11-f00d-5386-ad1b-d279472a9982", 00:21:57.115 "is_configured": true, 00:21:57.115 "data_offset": 0, 00:21:57.115 "data_size": 65536 00:21:57.115 } 00:21:57.115 ] 00:21:57.115 }' 00:21:57.115 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:57.115 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:57.115 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:57.115 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:57.115 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:57.115 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:57.115 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:57.115 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:57.115 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:57.115 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:57.115 
13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:57.115 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:57.115 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:57.115 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:57.115 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.115 13:16:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.115 13:16:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.115 13:16:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.116 13:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.116 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:57.116 "name": "raid_bdev1", 00:21:57.116 "uuid": "c0098b16-e32f-458e-8360-a6b8878db9d6", 00:21:57.116 "strip_size_kb": 64, 00:21:57.116 "state": "online", 00:21:57.116 "raid_level": "raid5f", 00:21:57.116 "superblock": false, 00:21:57.116 "num_base_bdevs": 3, 00:21:57.116 "num_base_bdevs_discovered": 3, 00:21:57.116 "num_base_bdevs_operational": 3, 00:21:57.116 "base_bdevs_list": [ 00:21:57.116 { 00:21:57.116 "name": "spare", 00:21:57.116 "uuid": "a7806144-68b9-5668-9e8e-ff6da448ef5b", 00:21:57.116 "is_configured": true, 00:21:57.116 "data_offset": 0, 00:21:57.116 "data_size": 65536 00:21:57.116 }, 00:21:57.116 { 00:21:57.116 "name": "BaseBdev2", 00:21:57.116 "uuid": "f8b195e7-48d8-58fc-ac6c-fe788fab1260", 00:21:57.116 "is_configured": true, 00:21:57.116 "data_offset": 0, 00:21:57.116 "data_size": 65536 00:21:57.116 }, 00:21:57.116 { 00:21:57.116 "name": "BaseBdev3", 00:21:57.116 "uuid": "6b997d11-f00d-5386-ad1b-d279472a9982", 
00:21:57.116 "is_configured": true, 00:21:57.116 "data_offset": 0, 00:21:57.116 "data_size": 65536 00:21:57.116 } 00:21:57.116 ] 00:21:57.116 }' 00:21:57.116 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:57.116 13:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.684 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:57.684 13:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.684 13:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.684 [2024-12-06 13:16:44.529491] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:57.684 [2024-12-06 13:16:44.529537] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:57.684 [2024-12-06 13:16:44.529643] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:57.684 [2024-12-06 13:16:44.529751] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:57.684 [2024-12-06 13:16:44.529776] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:57.684 13:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.684 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.684 13:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.684 13:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.684 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:21:57.684 13:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.684 13:16:44 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:57.684 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:57.684 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:57.684 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:57.684 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:57.684 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:57.684 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:57.684 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:57.684 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:57.684 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:21:57.684 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:57.684 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:57.684 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:58.258 /dev/nbd0 00:21:58.258 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:58.258 13:16:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:58.258 13:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:58.258 13:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:21:58.258 13:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:58.258 13:16:44 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:58.258 13:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:58.258 13:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:21:58.258 13:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:58.258 13:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:58.258 13:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:58.258 1+0 records in 00:21:58.258 1+0 records out 00:21:58.258 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412732 s, 9.9 MB/s 00:21:58.258 13:16:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:58.258 13:16:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:21:58.258 13:16:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:58.258 13:16:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:58.258 13:16:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:21:58.258 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:58.258 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:58.258 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:58.516 /dev/nbd1 00:21:58.517 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:58.517 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:58.517 13:16:45 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:58.517 13:16:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:21:58.517 13:16:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:58.517 13:16:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:58.517 13:16:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:58.517 13:16:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:21:58.517 13:16:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:58.517 13:16:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:58.517 13:16:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:58.517 1+0 records in 00:21:58.517 1+0 records out 00:21:58.517 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402707 s, 10.2 MB/s 00:21:58.517 13:16:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:58.517 13:16:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:21:58.517 13:16:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:58.517 13:16:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:58.517 13:16:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:21:58.517 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:58.517 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:58.517 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:58.517 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:58.517 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:58.517 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:58.517 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:58.517 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:21:58.517 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:58.517 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:59.083 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:59.083 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:59.083 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:59.083 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:59.083 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:59.083 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:59.083 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:59.083 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:59.083 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:59.083 13:16:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:59.343 13:16:46 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:59.343 13:16:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:59.343 13:16:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:59.343 13:16:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:59.343 13:16:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:59.343 13:16:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:59.343 13:16:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:59.343 13:16:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:59.343 13:16:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:21:59.343 13:16:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 82299 00:21:59.343 13:16:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 82299 ']' 00:21:59.343 13:16:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 82299 00:21:59.343 13:16:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:21:59.343 13:16:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:59.343 13:16:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82299 00:21:59.343 killing process with pid 82299 00:21:59.343 Received shutdown signal, test time was about 60.000000 seconds 00:21:59.343 00:21:59.343 Latency(us) 00:21:59.343 [2024-12-06T13:16:46.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.343 [2024-12-06T13:16:46.359Z] =================================================================================================================== 00:21:59.343 [2024-12-06T13:16:46.359Z] Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:21:59.343 13:16:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:59.343 13:16:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:59.343 13:16:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82299' 00:21:59.343 13:16:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 82299 00:21:59.343 13:16:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 82299 00:21:59.343 [2024-12-06 13:16:46.218899] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:59.602 [2024-12-06 13:16:46.579261] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:22:01.027 00:22:01.027 real 0m16.947s 00:22:01.027 user 0m21.678s 00:22:01.027 sys 0m2.240s 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.027 ************************************ 00:22:01.027 END TEST raid5f_rebuild_test 00:22:01.027 ************************************ 00:22:01.027 13:16:47 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:22:01.027 13:16:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:22:01.027 13:16:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:01.027 13:16:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:01.027 ************************************ 00:22:01.027 START TEST raid5f_rebuild_test_sb 00:22:01.027 ************************************ 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 
00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local 
raid_bdev_name=raid_bdev1 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82749 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82749 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82749 ']' 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:22:01.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:01.027 13:16:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:01.027 [2024-12-06 13:16:47.866143] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:22:01.027 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:01.027 Zero copy mechanism will not be used. 00:22:01.027 [2024-12-06 13:16:47.866346] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82749 ] 00:22:01.287 [2024-12-06 13:16:48.056572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.287 [2024-12-06 13:16:48.198950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.546 [2024-12-06 13:16:48.416919] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:01.546 [2024-12-06 13:16:48.416990] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:02.148 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:02.148 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:22:02.148 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:02.148 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:02.148 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.148 13:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:22:02.148 BaseBdev1_malloc 00:22:02.148 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.148 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:02.148 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.148 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:02.148 [2024-12-06 13:16:49.029996] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:02.148 [2024-12-06 13:16:49.030078] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:02.148 [2024-12-06 13:16:49.030112] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:02.148 [2024-12-06 13:16:49.030130] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:02.148 [2024-12-06 13:16:49.033300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:02.148 [2024-12-06 13:16:49.033352] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:02.148 BaseBdev1 00:22:02.148 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.148 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:02.148 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:02.148 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.148 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:02.148 BaseBdev2_malloc 00:22:02.148 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.148 13:16:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:02.148 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.148 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:02.148 [2024-12-06 13:16:49.088739] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:02.148 [2024-12-06 13:16:49.088828] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:02.148 [2024-12-06 13:16:49.088873] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:02.148 [2024-12-06 13:16:49.088891] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:02.148 [2024-12-06 13:16:49.091909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:02.148 [2024-12-06 13:16:49.091951] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:02.148 BaseBdev2 00:22:02.148 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.148 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:02.148 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:02.148 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.148 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:02.148 BaseBdev3_malloc 00:22:02.148 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.148 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:02.148 13:16:49 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.148 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:02.407 [2024-12-06 13:16:49.165044] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:02.407 [2024-12-06 13:16:49.165136] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:02.407 [2024-12-06 13:16:49.165171] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:02.407 [2024-12-06 13:16:49.165191] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:02.407 [2024-12-06 13:16:49.168193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:02.407 [2024-12-06 13:16:49.168239] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:02.407 BaseBdev3 00:22:02.407 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.407 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:22:02.407 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.407 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:02.407 spare_malloc 00:22:02.407 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.407 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:02.407 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.407 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:02.407 spare_delay 00:22:02.407 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:22:02.407 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:02.407 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.407 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:02.407 [2024-12-06 13:16:49.231999] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:02.407 [2024-12-06 13:16:49.232073] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:02.407 [2024-12-06 13:16:49.232105] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:22:02.407 [2024-12-06 13:16:49.232125] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:02.407 [2024-12-06 13:16:49.235253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:02.407 [2024-12-06 13:16:49.235303] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:02.407 spare 00:22:02.407 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.407 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:22:02.407 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.407 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:02.407 [2024-12-06 13:16:49.240165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:02.407 [2024-12-06 13:16:49.242786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:02.407 [2024-12-06 13:16:49.242895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:02.407 [2024-12-06 
13:16:49.243152] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:02.407 [2024-12-06 13:16:49.243180] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:02.407 [2024-12-06 13:16:49.243526] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:02.407 [2024-12-06 13:16:49.248907] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:02.407 [2024-12-06 13:16:49.248945] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:02.407 [2024-12-06 13:16:49.249179] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:02.407 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.408 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:02.408 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:02.408 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:02.408 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:02.408 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:02.408 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:02.408 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:02.408 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:02.408 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:02.408 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:02.408 
13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.408 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:02.408 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.408 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:02.408 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.408 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:02.408 "name": "raid_bdev1", 00:22:02.408 "uuid": "dd7caa23-fd51-44c3-a91f-f8d32739e0db", 00:22:02.408 "strip_size_kb": 64, 00:22:02.408 "state": "online", 00:22:02.408 "raid_level": "raid5f", 00:22:02.408 "superblock": true, 00:22:02.408 "num_base_bdevs": 3, 00:22:02.408 "num_base_bdevs_discovered": 3, 00:22:02.408 "num_base_bdevs_operational": 3, 00:22:02.408 "base_bdevs_list": [ 00:22:02.408 { 00:22:02.408 "name": "BaseBdev1", 00:22:02.408 "uuid": "1129a710-2e3a-5c2d-b21b-112eaff5032c", 00:22:02.408 "is_configured": true, 00:22:02.408 "data_offset": 2048, 00:22:02.408 "data_size": 63488 00:22:02.408 }, 00:22:02.408 { 00:22:02.408 "name": "BaseBdev2", 00:22:02.408 "uuid": "7ae6d6df-6372-5e15-abca-405c329bd6c6", 00:22:02.408 "is_configured": true, 00:22:02.408 "data_offset": 2048, 00:22:02.408 "data_size": 63488 00:22:02.408 }, 00:22:02.408 { 00:22:02.408 "name": "BaseBdev3", 00:22:02.408 "uuid": "360ed481-a8e9-510e-895a-c45f12e1d9c1", 00:22:02.408 "is_configured": true, 00:22:02.408 "data_offset": 2048, 00:22:02.408 "data_size": 63488 00:22:02.408 } 00:22:02.408 ] 00:22:02.408 }' 00:22:02.408 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:02.408 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:02.975 13:16:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:02.975 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:02.975 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.975 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:02.975 [2024-12-06 13:16:49.783693] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:02.975 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.975 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:22:02.975 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.975 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.975 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:02.975 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:02.975 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.975 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:22:02.975 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:02.975 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:22:02.975 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:22:02.975 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:22:02.975 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 
00:22:02.975 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:02.975 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:02.975 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:02.975 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:02.975 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:22:02.975 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:02.975 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:02.975 13:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:03.233 [2024-12-06 13:16:50.192041] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:22:03.233 /dev/nbd0 00:22:03.233 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:03.233 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:03.233 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:03.233 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:22:03.233 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:03.233 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:03.233 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:03.233 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:22:03.233 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 
)) 00:22:03.233 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:03.233 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:03.233 1+0 records in 00:22:03.233 1+0 records out 00:22:03.233 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000484077 s, 8.5 MB/s 00:22:03.233 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:03.492 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:22:03.492 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:03.492 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:03.492 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:22:03.492 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:03.492 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:03.492 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:22:03.492 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:22:03.492 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:22:03.492 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:22:03.751 496+0 records in 00:22:03.751 496+0 records out 00:22:03.751 65011712 bytes (65 MB, 62 MiB) copied, 0.438529 s, 148 MB/s 00:22:03.751 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:22:03.751 13:16:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:03.751 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:03.751 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:03.751 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:22:03.751 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:03.751 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:04.010 [2024-12-06 13:16:50.974294] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:04.010 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:04.010 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:04.010 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:04.010 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:04.010 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:04.010 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:04.010 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:04.010 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:04.010 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:04.010 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.010 13:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:04.010 [2024-12-06 13:16:50.996677] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:04.010 13:16:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.010 13:16:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:04.010 13:16:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:04.010 13:16:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:04.010 13:16:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:04.010 13:16:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:04.010 13:16:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:04.010 13:16:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:04.010 13:16:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:04.010 13:16:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:04.010 13:16:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:04.010 13:16:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:04.010 13:16:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.010 13:16:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:04.010 13:16:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:04.010 13:16:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.270 13:16:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:04.270 "name": "raid_bdev1", 
00:22:04.270 "uuid": "dd7caa23-fd51-44c3-a91f-f8d32739e0db", 00:22:04.270 "strip_size_kb": 64, 00:22:04.270 "state": "online", 00:22:04.270 "raid_level": "raid5f", 00:22:04.270 "superblock": true, 00:22:04.270 "num_base_bdevs": 3, 00:22:04.270 "num_base_bdevs_discovered": 2, 00:22:04.270 "num_base_bdevs_operational": 2, 00:22:04.270 "base_bdevs_list": [ 00:22:04.270 { 00:22:04.270 "name": null, 00:22:04.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:04.270 "is_configured": false, 00:22:04.270 "data_offset": 0, 00:22:04.270 "data_size": 63488 00:22:04.270 }, 00:22:04.270 { 00:22:04.270 "name": "BaseBdev2", 00:22:04.270 "uuid": "7ae6d6df-6372-5e15-abca-405c329bd6c6", 00:22:04.270 "is_configured": true, 00:22:04.270 "data_offset": 2048, 00:22:04.270 "data_size": 63488 00:22:04.270 }, 00:22:04.270 { 00:22:04.270 "name": "BaseBdev3", 00:22:04.270 "uuid": "360ed481-a8e9-510e-895a-c45f12e1d9c1", 00:22:04.270 "is_configured": true, 00:22:04.270 "data_offset": 2048, 00:22:04.270 "data_size": 63488 00:22:04.270 } 00:22:04.270 ] 00:22:04.270 }' 00:22:04.270 13:16:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:04.270 13:16:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:04.528 13:16:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:04.528 13:16:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.528 13:16:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:04.528 [2024-12-06 13:16:51.512803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:04.528 [2024-12-06 13:16:51.529160] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:22:04.528 13:16:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.528 13:16:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:04.528 [2024-12-06 13:16:51.537037] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:05.906 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:05.906 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:05.906 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:05.906 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:05.906 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:05.906 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:05.906 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.906 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.906 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:05.906 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.906 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:05.906 "name": "raid_bdev1", 00:22:05.906 "uuid": "dd7caa23-fd51-44c3-a91f-f8d32739e0db", 00:22:05.906 "strip_size_kb": 64, 00:22:05.906 "state": "online", 00:22:05.906 "raid_level": "raid5f", 00:22:05.906 "superblock": true, 00:22:05.906 "num_base_bdevs": 3, 00:22:05.906 "num_base_bdevs_discovered": 3, 00:22:05.906 "num_base_bdevs_operational": 3, 00:22:05.906 "process": { 00:22:05.906 "type": "rebuild", 00:22:05.906 "target": "spare", 00:22:05.906 "progress": { 00:22:05.906 "blocks": 18432, 00:22:05.906 "percent": 14 00:22:05.906 } 
00:22:05.906 }, 00:22:05.906 "base_bdevs_list": [ 00:22:05.906 { 00:22:05.906 "name": "spare", 00:22:05.906 "uuid": "fc5c160a-03b9-5393-8e25-b59b1a8d5b40", 00:22:05.906 "is_configured": true, 00:22:05.906 "data_offset": 2048, 00:22:05.906 "data_size": 63488 00:22:05.906 }, 00:22:05.906 { 00:22:05.906 "name": "BaseBdev2", 00:22:05.906 "uuid": "7ae6d6df-6372-5e15-abca-405c329bd6c6", 00:22:05.906 "is_configured": true, 00:22:05.906 "data_offset": 2048, 00:22:05.906 "data_size": 63488 00:22:05.906 }, 00:22:05.906 { 00:22:05.906 "name": "BaseBdev3", 00:22:05.906 "uuid": "360ed481-a8e9-510e-895a-c45f12e1d9c1", 00:22:05.906 "is_configured": true, 00:22:05.906 "data_offset": 2048, 00:22:05.906 "data_size": 63488 00:22:05.906 } 00:22:05.906 ] 00:22:05.906 }' 00:22:05.906 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:05.906 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:05.906 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:05.906 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:05.906 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:05.906 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.906 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:05.906 [2024-12-06 13:16:52.704204] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:05.906 [2024-12-06 13:16:52.755486] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:05.907 [2024-12-06 13:16:52.755602] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:05.907 [2024-12-06 13:16:52.755635] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:05.907 [2024-12-06 13:16:52.755648] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:05.907 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.907 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:05.907 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:05.907 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:05.907 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:05.907 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:05.907 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:05.907 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:05.907 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:05.907 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:05.907 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:05.907 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:05.907 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.907 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.907 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:05.907 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:22:05.907 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:05.907 "name": "raid_bdev1", 00:22:05.907 "uuid": "dd7caa23-fd51-44c3-a91f-f8d32739e0db", 00:22:05.907 "strip_size_kb": 64, 00:22:05.907 "state": "online", 00:22:05.907 "raid_level": "raid5f", 00:22:05.907 "superblock": true, 00:22:05.907 "num_base_bdevs": 3, 00:22:05.907 "num_base_bdevs_discovered": 2, 00:22:05.907 "num_base_bdevs_operational": 2, 00:22:05.907 "base_bdevs_list": [ 00:22:05.907 { 00:22:05.907 "name": null, 00:22:05.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.907 "is_configured": false, 00:22:05.907 "data_offset": 0, 00:22:05.907 "data_size": 63488 00:22:05.907 }, 00:22:05.907 { 00:22:05.907 "name": "BaseBdev2", 00:22:05.907 "uuid": "7ae6d6df-6372-5e15-abca-405c329bd6c6", 00:22:05.907 "is_configured": true, 00:22:05.907 "data_offset": 2048, 00:22:05.907 "data_size": 63488 00:22:05.907 }, 00:22:05.907 { 00:22:05.907 "name": "BaseBdev3", 00:22:05.907 "uuid": "360ed481-a8e9-510e-895a-c45f12e1d9c1", 00:22:05.907 "is_configured": true, 00:22:05.907 "data_offset": 2048, 00:22:05.907 "data_size": 63488 00:22:05.907 } 00:22:05.907 ] 00:22:05.907 }' 00:22:05.907 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:05.907 13:16:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.475 13:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:06.475 13:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:06.475 13:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:06.475 13:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:06.475 13:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:06.475 13:16:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:06.475 13:16:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.475 13:16:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.475 13:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:06.475 13:16:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.475 13:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:06.475 "name": "raid_bdev1", 00:22:06.475 "uuid": "dd7caa23-fd51-44c3-a91f-f8d32739e0db", 00:22:06.475 "strip_size_kb": 64, 00:22:06.475 "state": "online", 00:22:06.475 "raid_level": "raid5f", 00:22:06.475 "superblock": true, 00:22:06.475 "num_base_bdevs": 3, 00:22:06.475 "num_base_bdevs_discovered": 2, 00:22:06.475 "num_base_bdevs_operational": 2, 00:22:06.475 "base_bdevs_list": [ 00:22:06.475 { 00:22:06.475 "name": null, 00:22:06.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.475 "is_configured": false, 00:22:06.475 "data_offset": 0, 00:22:06.475 "data_size": 63488 00:22:06.475 }, 00:22:06.475 { 00:22:06.475 "name": "BaseBdev2", 00:22:06.475 "uuid": "7ae6d6df-6372-5e15-abca-405c329bd6c6", 00:22:06.475 "is_configured": true, 00:22:06.475 "data_offset": 2048, 00:22:06.475 "data_size": 63488 00:22:06.475 }, 00:22:06.475 { 00:22:06.475 "name": "BaseBdev3", 00:22:06.475 "uuid": "360ed481-a8e9-510e-895a-c45f12e1d9c1", 00:22:06.475 "is_configured": true, 00:22:06.475 "data_offset": 2048, 00:22:06.475 "data_size": 63488 00:22:06.475 } 00:22:06.475 ] 00:22:06.475 }' 00:22:06.475 13:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:06.475 13:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:06.475 13:16:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:06.475 13:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:06.475 13:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:06.475 13:16:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.475 13:16:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.475 [2024-12-06 13:16:53.469316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:06.475 [2024-12-06 13:16:53.484828] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:22:06.475 13:16:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.475 13:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:06.734 [2024-12-06 13:16:53.492567] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:07.671 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:07.671 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:07.671 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:07.671 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:07.671 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:07.671 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:07.671 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:07.671 13:16:54 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.671 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.671 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.671 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:07.671 "name": "raid_bdev1", 00:22:07.671 "uuid": "dd7caa23-fd51-44c3-a91f-f8d32739e0db", 00:22:07.671 "strip_size_kb": 64, 00:22:07.671 "state": "online", 00:22:07.671 "raid_level": "raid5f", 00:22:07.671 "superblock": true, 00:22:07.671 "num_base_bdevs": 3, 00:22:07.671 "num_base_bdevs_discovered": 3, 00:22:07.671 "num_base_bdevs_operational": 3, 00:22:07.671 "process": { 00:22:07.671 "type": "rebuild", 00:22:07.671 "target": "spare", 00:22:07.671 "progress": { 00:22:07.671 "blocks": 18432, 00:22:07.671 "percent": 14 00:22:07.671 } 00:22:07.671 }, 00:22:07.671 "base_bdevs_list": [ 00:22:07.671 { 00:22:07.671 "name": "spare", 00:22:07.671 "uuid": "fc5c160a-03b9-5393-8e25-b59b1a8d5b40", 00:22:07.671 "is_configured": true, 00:22:07.671 "data_offset": 2048, 00:22:07.671 "data_size": 63488 00:22:07.671 }, 00:22:07.671 { 00:22:07.671 "name": "BaseBdev2", 00:22:07.671 "uuid": "7ae6d6df-6372-5e15-abca-405c329bd6c6", 00:22:07.671 "is_configured": true, 00:22:07.671 "data_offset": 2048, 00:22:07.671 "data_size": 63488 00:22:07.671 }, 00:22:07.671 { 00:22:07.671 "name": "BaseBdev3", 00:22:07.671 "uuid": "360ed481-a8e9-510e-895a-c45f12e1d9c1", 00:22:07.671 "is_configured": true, 00:22:07.671 "data_offset": 2048, 00:22:07.671 "data_size": 63488 00:22:07.671 } 00:22:07.671 ] 00:22:07.671 }' 00:22:07.671 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:07.671 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:07.671 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:07.671 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:07.671 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:22:07.671 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:22:07.671 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:22:07.671 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:22:07.671 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:22:07.671 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=625 00:22:07.671 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:07.671 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:07.671 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:07.671 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:07.671 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:07.671 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:07.671 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:07.671 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:07.671 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.671 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.929 13:16:54 
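The `[: =: unary operator expected` message logged above is a classic bash pitfall rather than a RAID failure: `bdev_raid.sh` line 666 evaluates `'[' "$var" = false ']'` with an unquoted variable that happens to be empty, so word splitting removes the left operand entirely and `[` sees `= false`. A minimal reproduction (using a hypothetical `flag` variable, not the script's actual one) and the two usual fixes:

```shell
#!/usr/bin/env bash
# Reproduce the "[: =: unary operator expected" error from the log above.
# With flag empty and unquoted, the expansion word-splits to nothing and
# the test degenerates to `[ = false ]`, which has no left operand.
flag=""
if [ $flag = false ]; then   # stderr: [: =: unary operator expected; test returns status 2
  echo "broken branch"
fi

# Fix 1: quote the expansion so an empty string still counts as one word.
if [ "$flag" = false ]; then echo "quoted: false"; else echo "quoted: not false"; fi

# Fix 2: use [[ ]], which is a bash keyword and does not word-split expansions.
if [[ $flag = false ]]; then echo "bracket: false"; else echo "bracket: not false"; fi
```

Because `if` merely treats the failed test as false, the script continues past the error, which is why the rebuild verification in the log proceeds normally afterwards.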
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.929 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:07.929 "name": "raid_bdev1", 00:22:07.929 "uuid": "dd7caa23-fd51-44c3-a91f-f8d32739e0db", 00:22:07.929 "strip_size_kb": 64, 00:22:07.929 "state": "online", 00:22:07.929 "raid_level": "raid5f", 00:22:07.929 "superblock": true, 00:22:07.929 "num_base_bdevs": 3, 00:22:07.929 "num_base_bdevs_discovered": 3, 00:22:07.930 "num_base_bdevs_operational": 3, 00:22:07.930 "process": { 00:22:07.930 "type": "rebuild", 00:22:07.930 "target": "spare", 00:22:07.930 "progress": { 00:22:07.930 "blocks": 22528, 00:22:07.930 "percent": 17 00:22:07.930 } 00:22:07.930 }, 00:22:07.930 "base_bdevs_list": [ 00:22:07.930 { 00:22:07.930 "name": "spare", 00:22:07.930 "uuid": "fc5c160a-03b9-5393-8e25-b59b1a8d5b40", 00:22:07.930 "is_configured": true, 00:22:07.930 "data_offset": 2048, 00:22:07.930 "data_size": 63488 00:22:07.930 }, 00:22:07.930 { 00:22:07.930 "name": "BaseBdev2", 00:22:07.930 "uuid": "7ae6d6df-6372-5e15-abca-405c329bd6c6", 00:22:07.930 "is_configured": true, 00:22:07.930 "data_offset": 2048, 00:22:07.930 "data_size": 63488 00:22:07.930 }, 00:22:07.930 { 00:22:07.930 "name": "BaseBdev3", 00:22:07.930 "uuid": "360ed481-a8e9-510e-895a-c45f12e1d9c1", 00:22:07.930 "is_configured": true, 00:22:07.930 "data_offset": 2048, 00:22:07.930 "data_size": 63488 00:22:07.930 } 00:22:07.930 ] 00:22:07.930 }' 00:22:07.930 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:07.930 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:07.930 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:07.930 13:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:07.930 13:16:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:08.862 13:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:08.862 13:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:08.862 13:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:08.862 13:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:08.862 13:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:08.862 13:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:08.862 13:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.862 13:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.862 13:16:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.862 13:16:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.862 13:16:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.120 13:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:09.120 "name": "raid_bdev1", 00:22:09.120 "uuid": "dd7caa23-fd51-44c3-a91f-f8d32739e0db", 00:22:09.120 "strip_size_kb": 64, 00:22:09.120 "state": "online", 00:22:09.120 "raid_level": "raid5f", 00:22:09.120 "superblock": true, 00:22:09.120 "num_base_bdevs": 3, 00:22:09.120 "num_base_bdevs_discovered": 3, 00:22:09.120 "num_base_bdevs_operational": 3, 00:22:09.120 "process": { 00:22:09.120 "type": "rebuild", 00:22:09.120 "target": "spare", 00:22:09.120 "progress": { 00:22:09.120 "blocks": 47104, 00:22:09.120 "percent": 37 00:22:09.120 } 00:22:09.120 }, 00:22:09.120 
"base_bdevs_list": [ 00:22:09.120 { 00:22:09.120 "name": "spare", 00:22:09.120 "uuid": "fc5c160a-03b9-5393-8e25-b59b1a8d5b40", 00:22:09.120 "is_configured": true, 00:22:09.120 "data_offset": 2048, 00:22:09.120 "data_size": 63488 00:22:09.120 }, 00:22:09.120 { 00:22:09.120 "name": "BaseBdev2", 00:22:09.120 "uuid": "7ae6d6df-6372-5e15-abca-405c329bd6c6", 00:22:09.120 "is_configured": true, 00:22:09.120 "data_offset": 2048, 00:22:09.120 "data_size": 63488 00:22:09.120 }, 00:22:09.120 { 00:22:09.120 "name": "BaseBdev3", 00:22:09.120 "uuid": "360ed481-a8e9-510e-895a-c45f12e1d9c1", 00:22:09.120 "is_configured": true, 00:22:09.120 "data_offset": 2048, 00:22:09.120 "data_size": 63488 00:22:09.120 } 00:22:09.120 ] 00:22:09.120 }' 00:22:09.120 13:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:09.120 13:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:09.120 13:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:09.120 13:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:09.120 13:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:10.053 13:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:10.053 13:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:10.053 13:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:10.053 13:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:10.053 13:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:10.053 13:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:10.053 13:16:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.053 13:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.053 13:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.053 13:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.053 13:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.053 13:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:10.053 "name": "raid_bdev1", 00:22:10.053 "uuid": "dd7caa23-fd51-44c3-a91f-f8d32739e0db", 00:22:10.053 "strip_size_kb": 64, 00:22:10.053 "state": "online", 00:22:10.053 "raid_level": "raid5f", 00:22:10.053 "superblock": true, 00:22:10.053 "num_base_bdevs": 3, 00:22:10.053 "num_base_bdevs_discovered": 3, 00:22:10.053 "num_base_bdevs_operational": 3, 00:22:10.053 "process": { 00:22:10.053 "type": "rebuild", 00:22:10.053 "target": "spare", 00:22:10.053 "progress": { 00:22:10.053 "blocks": 69632, 00:22:10.053 "percent": 54 00:22:10.053 } 00:22:10.053 }, 00:22:10.053 "base_bdevs_list": [ 00:22:10.053 { 00:22:10.053 "name": "spare", 00:22:10.053 "uuid": "fc5c160a-03b9-5393-8e25-b59b1a8d5b40", 00:22:10.053 "is_configured": true, 00:22:10.053 "data_offset": 2048, 00:22:10.053 "data_size": 63488 00:22:10.053 }, 00:22:10.053 { 00:22:10.053 "name": "BaseBdev2", 00:22:10.053 "uuid": "7ae6d6df-6372-5e15-abca-405c329bd6c6", 00:22:10.053 "is_configured": true, 00:22:10.053 "data_offset": 2048, 00:22:10.053 "data_size": 63488 00:22:10.053 }, 00:22:10.053 { 00:22:10.053 "name": "BaseBdev3", 00:22:10.053 "uuid": "360ed481-a8e9-510e-895a-c45f12e1d9c1", 00:22:10.053 "is_configured": true, 00:22:10.053 "data_offset": 2048, 00:22:10.053 "data_size": 63488 00:22:10.053 } 00:22:10.053 ] 00:22:10.053 }' 00:22:10.053 13:16:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:10.312 13:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:10.312 13:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:10.312 13:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:10.312 13:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:11.247 13:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:11.247 13:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:11.247 13:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:11.247 13:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:11.247 13:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:11.247 13:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:11.247 13:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.247 13:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.247 13:16:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.247 13:16:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.247 13:16:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.247 13:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:11.247 "name": "raid_bdev1", 00:22:11.247 "uuid": "dd7caa23-fd51-44c3-a91f-f8d32739e0db", 00:22:11.247 
"strip_size_kb": 64, 00:22:11.247 "state": "online", 00:22:11.247 "raid_level": "raid5f", 00:22:11.247 "superblock": true, 00:22:11.247 "num_base_bdevs": 3, 00:22:11.247 "num_base_bdevs_discovered": 3, 00:22:11.247 "num_base_bdevs_operational": 3, 00:22:11.247 "process": { 00:22:11.247 "type": "rebuild", 00:22:11.247 "target": "spare", 00:22:11.248 "progress": { 00:22:11.248 "blocks": 92160, 00:22:11.248 "percent": 72 00:22:11.248 } 00:22:11.248 }, 00:22:11.248 "base_bdevs_list": [ 00:22:11.248 { 00:22:11.248 "name": "spare", 00:22:11.248 "uuid": "fc5c160a-03b9-5393-8e25-b59b1a8d5b40", 00:22:11.248 "is_configured": true, 00:22:11.248 "data_offset": 2048, 00:22:11.248 "data_size": 63488 00:22:11.248 }, 00:22:11.248 { 00:22:11.248 "name": "BaseBdev2", 00:22:11.248 "uuid": "7ae6d6df-6372-5e15-abca-405c329bd6c6", 00:22:11.248 "is_configured": true, 00:22:11.248 "data_offset": 2048, 00:22:11.248 "data_size": 63488 00:22:11.248 }, 00:22:11.248 { 00:22:11.248 "name": "BaseBdev3", 00:22:11.248 "uuid": "360ed481-a8e9-510e-895a-c45f12e1d9c1", 00:22:11.248 "is_configured": true, 00:22:11.248 "data_offset": 2048, 00:22:11.248 "data_size": 63488 00:22:11.248 } 00:22:11.248 ] 00:22:11.248 }' 00:22:11.248 13:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:11.248 13:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:11.248 13:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:11.506 13:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:11.506 13:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:12.440 13:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:12.440 13:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:22:12.440 13:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:12.440 13:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:12.440 13:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:12.440 13:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:12.440 13:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.440 13:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.440 13:16:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.440 13:16:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.440 13:16:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.440 13:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:12.440 "name": "raid_bdev1", 00:22:12.440 "uuid": "dd7caa23-fd51-44c3-a91f-f8d32739e0db", 00:22:12.440 "strip_size_kb": 64, 00:22:12.440 "state": "online", 00:22:12.440 "raid_level": "raid5f", 00:22:12.440 "superblock": true, 00:22:12.440 "num_base_bdevs": 3, 00:22:12.440 "num_base_bdevs_discovered": 3, 00:22:12.440 "num_base_bdevs_operational": 3, 00:22:12.440 "process": { 00:22:12.440 "type": "rebuild", 00:22:12.440 "target": "spare", 00:22:12.440 "progress": { 00:22:12.440 "blocks": 116736, 00:22:12.440 "percent": 91 00:22:12.440 } 00:22:12.440 }, 00:22:12.440 "base_bdevs_list": [ 00:22:12.440 { 00:22:12.440 "name": "spare", 00:22:12.440 "uuid": "fc5c160a-03b9-5393-8e25-b59b1a8d5b40", 00:22:12.440 "is_configured": true, 00:22:12.440 "data_offset": 2048, 00:22:12.440 "data_size": 63488 00:22:12.440 }, 00:22:12.440 { 00:22:12.440 "name": "BaseBdev2", 00:22:12.440 "uuid": 
"7ae6d6df-6372-5e15-abca-405c329bd6c6", 00:22:12.440 "is_configured": true, 00:22:12.440 "data_offset": 2048, 00:22:12.440 "data_size": 63488 00:22:12.440 }, 00:22:12.440 { 00:22:12.440 "name": "BaseBdev3", 00:22:12.440 "uuid": "360ed481-a8e9-510e-895a-c45f12e1d9c1", 00:22:12.440 "is_configured": true, 00:22:12.440 "data_offset": 2048, 00:22:12.440 "data_size": 63488 00:22:12.440 } 00:22:12.440 ] 00:22:12.440 }' 00:22:12.440 13:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:12.440 13:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:12.440 13:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:12.698 13:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:12.698 13:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:12.957 [2024-12-06 13:16:59.792643] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:12.957 [2024-12-06 13:16:59.792810] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:12.957 [2024-12-06 13:16:59.793063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:13.614 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:13.614 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:13.614 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:13.614 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:13.614 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:13.614 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:13.614 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.614 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.614 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.614 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.614 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.614 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:13.614 "name": "raid_bdev1", 00:22:13.614 "uuid": "dd7caa23-fd51-44c3-a91f-f8d32739e0db", 00:22:13.614 "strip_size_kb": 64, 00:22:13.614 "state": "online", 00:22:13.614 "raid_level": "raid5f", 00:22:13.614 "superblock": true, 00:22:13.615 "num_base_bdevs": 3, 00:22:13.615 "num_base_bdevs_discovered": 3, 00:22:13.615 "num_base_bdevs_operational": 3, 00:22:13.615 "base_bdevs_list": [ 00:22:13.615 { 00:22:13.615 "name": "spare", 00:22:13.615 "uuid": "fc5c160a-03b9-5393-8e25-b59b1a8d5b40", 00:22:13.615 "is_configured": true, 00:22:13.615 "data_offset": 2048, 00:22:13.615 "data_size": 63488 00:22:13.615 }, 00:22:13.615 { 00:22:13.615 "name": "BaseBdev2", 00:22:13.615 "uuid": "7ae6d6df-6372-5e15-abca-405c329bd6c6", 00:22:13.615 "is_configured": true, 00:22:13.615 "data_offset": 2048, 00:22:13.615 "data_size": 63488 00:22:13.615 }, 00:22:13.615 { 00:22:13.615 "name": "BaseBdev3", 00:22:13.615 "uuid": "360ed481-a8e9-510e-895a-c45f12e1d9c1", 00:22:13.615 "is_configured": true, 00:22:13.615 "data_offset": 2048, 00:22:13.615 "data_size": 63488 00:22:13.615 } 00:22:13.615 ] 00:22:13.615 }' 00:22:13.615 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:13.615 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:13.615 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:13.873 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:13.873 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:22:13.873 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:13.873 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:13.873 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:13.873 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:13.873 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:13.873 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.873 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.873 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.873 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.873 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.873 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:13.873 "name": "raid_bdev1", 00:22:13.873 "uuid": "dd7caa23-fd51-44c3-a91f-f8d32739e0db", 00:22:13.873 "strip_size_kb": 64, 00:22:13.873 "state": "online", 00:22:13.873 "raid_level": "raid5f", 00:22:13.873 "superblock": true, 00:22:13.873 "num_base_bdevs": 3, 00:22:13.873 "num_base_bdevs_discovered": 3, 00:22:13.873 "num_base_bdevs_operational": 3, 00:22:13.873 "base_bdevs_list": [ 
00:22:13.873 { 00:22:13.873 "name": "spare", 00:22:13.873 "uuid": "fc5c160a-03b9-5393-8e25-b59b1a8d5b40", 00:22:13.873 "is_configured": true, 00:22:13.873 "data_offset": 2048, 00:22:13.873 "data_size": 63488 00:22:13.873 }, 00:22:13.873 { 00:22:13.873 "name": "BaseBdev2", 00:22:13.873 "uuid": "7ae6d6df-6372-5e15-abca-405c329bd6c6", 00:22:13.873 "is_configured": true, 00:22:13.873 "data_offset": 2048, 00:22:13.873 "data_size": 63488 00:22:13.873 }, 00:22:13.873 { 00:22:13.873 "name": "BaseBdev3", 00:22:13.873 "uuid": "360ed481-a8e9-510e-895a-c45f12e1d9c1", 00:22:13.873 "is_configured": true, 00:22:13.873 "data_offset": 2048, 00:22:13.873 "data_size": 63488 00:22:13.873 } 00:22:13.873 ] 00:22:13.873 }' 00:22:13.873 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:13.873 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:13.873 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:13.873 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:13.873 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:13.873 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:13.873 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:13.873 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:13.873 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:13.873 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:13.873 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:13.873 13:17:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:13.873 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:13.873 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:13.873 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.873 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.873 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.873 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.873 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.873 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:13.873 "name": "raid_bdev1", 00:22:13.873 "uuid": "dd7caa23-fd51-44c3-a91f-f8d32739e0db", 00:22:13.873 "strip_size_kb": 64, 00:22:13.873 "state": "online", 00:22:13.873 "raid_level": "raid5f", 00:22:13.873 "superblock": true, 00:22:13.873 "num_base_bdevs": 3, 00:22:13.873 "num_base_bdevs_discovered": 3, 00:22:13.873 "num_base_bdevs_operational": 3, 00:22:13.873 "base_bdevs_list": [ 00:22:13.873 { 00:22:13.874 "name": "spare", 00:22:13.874 "uuid": "fc5c160a-03b9-5393-8e25-b59b1a8d5b40", 00:22:13.874 "is_configured": true, 00:22:13.874 "data_offset": 2048, 00:22:13.874 "data_size": 63488 00:22:13.874 }, 00:22:13.874 { 00:22:13.874 "name": "BaseBdev2", 00:22:13.874 "uuid": "7ae6d6df-6372-5e15-abca-405c329bd6c6", 00:22:13.874 "is_configured": true, 00:22:13.874 "data_offset": 2048, 00:22:13.874 "data_size": 63488 00:22:13.874 }, 00:22:13.874 { 00:22:13.874 "name": "BaseBdev3", 00:22:13.874 "uuid": "360ed481-a8e9-510e-895a-c45f12e1d9c1", 00:22:13.874 "is_configured": true, 00:22:13.874 "data_offset": 2048, 00:22:13.874 
"data_size": 63488 00:22:13.874 } 00:22:13.874 ] 00:22:13.874 }' 00:22:13.874 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:13.874 13:17:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.439 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:14.439 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.439 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.439 [2024-12-06 13:17:01.292741] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:14.439 [2024-12-06 13:17:01.292789] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:14.439 [2024-12-06 13:17:01.292922] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:14.439 [2024-12-06 13:17:01.293059] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:14.439 [2024-12-06 13:17:01.293095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:14.439 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.439 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.439 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:22:14.439 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.439 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.439 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.439 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 
00:22:14.439 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:22:14.439 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:22:14.439 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:14.439 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:14.440 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:14.440 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:14.440 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:14.440 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:14.440 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:22:14.440 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:14.440 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:14.440 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:14.698 /dev/nbd0 00:22:14.698 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:14.698 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:14.698 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:14.698 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:22:14.698 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:14.698 13:17:01 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:14.698 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:14.698 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:22:14.698 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:14.698 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:14.698 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:14.698 1+0 records in 00:22:14.698 1+0 records out 00:22:14.698 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430728 s, 9.5 MB/s 00:22:14.698 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:14.698 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:22:14.698 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:14.698 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:14.698 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:22:14.698 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:14.698 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:14.698 13:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:22:15.264 /dev/nbd1 00:22:15.264 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:15.264 13:17:02 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:15.264 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:22:15.264 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:22:15.264 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:15.264 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:15.264 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:22:15.264 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:22:15.264 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:15.264 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:15.264 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:15.264 1+0 records in 00:22:15.264 1+0 records out 00:22:15.264 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430544 s, 9.5 MB/s 00:22:15.264 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:15.264 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:22:15.264 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:15.264 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:15.264 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:22:15.264 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:15.264 13:17:02 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:15.264 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:15.522 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:22:15.522 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:15.522 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:15.522 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:15.522 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:22:15.522 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:15.522 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:15.781 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:15.781 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:15.781 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:15.781 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:15.781 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:15.781 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:15.781 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:15.781 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:15.781 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:15.781 
13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:22:16.040 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:16.040 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:16.040 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:16.040 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:16.040 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:16.040 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:16.040 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:16.040 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:16.040 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:22:16.040 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:22:16.040 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.040 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:16.040 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.040 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:16.040 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.040 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:16.040 [2024-12-06 13:17:02.917602] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:16.040 
[2024-12-06 13:17:02.917713] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:16.040 [2024-12-06 13:17:02.917747] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:22:16.040 [2024-12-06 13:17:02.917765] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:16.040 [2024-12-06 13:17:02.920949] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:16.040 [2024-12-06 13:17:02.920995] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:16.040 [2024-12-06 13:17:02.921130] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:16.040 [2024-12-06 13:17:02.921202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:16.040 [2024-12-06 13:17:02.921395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:16.040 [2024-12-06 13:17:02.921603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:16.040 spare 00:22:16.040 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.040 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:22:16.040 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.040 13:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:16.040 [2024-12-06 13:17:03.021809] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:22:16.040 [2024-12-06 13:17:03.021904] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:16.040 [2024-12-06 13:17:03.022423] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:22:16.040 [2024-12-06 13:17:03.027604] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:22:16.040 [2024-12-06 13:17:03.027632] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:22:16.040 [2024-12-06 13:17:03.027945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:16.040 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.040 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:16.040 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:16.040 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:16.040 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:16.040 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:16.040 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:16.040 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:16.040 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:16.040 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:16.040 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:16.040 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:16.040 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:16.040 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.040 13:17:03 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:22:16.299 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.299 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:16.299 "name": "raid_bdev1", 00:22:16.299 "uuid": "dd7caa23-fd51-44c3-a91f-f8d32739e0db", 00:22:16.299 "strip_size_kb": 64, 00:22:16.299 "state": "online", 00:22:16.299 "raid_level": "raid5f", 00:22:16.299 "superblock": true, 00:22:16.299 "num_base_bdevs": 3, 00:22:16.299 "num_base_bdevs_discovered": 3, 00:22:16.299 "num_base_bdevs_operational": 3, 00:22:16.299 "base_bdevs_list": [ 00:22:16.299 { 00:22:16.299 "name": "spare", 00:22:16.299 "uuid": "fc5c160a-03b9-5393-8e25-b59b1a8d5b40", 00:22:16.299 "is_configured": true, 00:22:16.299 "data_offset": 2048, 00:22:16.299 "data_size": 63488 00:22:16.299 }, 00:22:16.299 { 00:22:16.299 "name": "BaseBdev2", 00:22:16.299 "uuid": "7ae6d6df-6372-5e15-abca-405c329bd6c6", 00:22:16.299 "is_configured": true, 00:22:16.299 "data_offset": 2048, 00:22:16.299 "data_size": 63488 00:22:16.299 }, 00:22:16.299 { 00:22:16.299 "name": "BaseBdev3", 00:22:16.299 "uuid": "360ed481-a8e9-510e-895a-c45f12e1d9c1", 00:22:16.299 "is_configured": true, 00:22:16.299 "data_offset": 2048, 00:22:16.299 "data_size": 63488 00:22:16.299 } 00:22:16.299 ] 00:22:16.299 }' 00:22:16.299 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:16.299 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:16.586 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:16.586 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:16.586 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:16.586 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # 
local target=none 00:22:16.586 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:16.586 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:16.586 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.586 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:16.586 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:16.586 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.845 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:16.845 "name": "raid_bdev1", 00:22:16.845 "uuid": "dd7caa23-fd51-44c3-a91f-f8d32739e0db", 00:22:16.845 "strip_size_kb": 64, 00:22:16.845 "state": "online", 00:22:16.845 "raid_level": "raid5f", 00:22:16.845 "superblock": true, 00:22:16.845 "num_base_bdevs": 3, 00:22:16.845 "num_base_bdevs_discovered": 3, 00:22:16.845 "num_base_bdevs_operational": 3, 00:22:16.845 "base_bdevs_list": [ 00:22:16.845 { 00:22:16.845 "name": "spare", 00:22:16.845 "uuid": "fc5c160a-03b9-5393-8e25-b59b1a8d5b40", 00:22:16.845 "is_configured": true, 00:22:16.845 "data_offset": 2048, 00:22:16.845 "data_size": 63488 00:22:16.845 }, 00:22:16.845 { 00:22:16.845 "name": "BaseBdev2", 00:22:16.845 "uuid": "7ae6d6df-6372-5e15-abca-405c329bd6c6", 00:22:16.845 "is_configured": true, 00:22:16.845 "data_offset": 2048, 00:22:16.845 "data_size": 63488 00:22:16.845 }, 00:22:16.845 { 00:22:16.845 "name": "BaseBdev3", 00:22:16.845 "uuid": "360ed481-a8e9-510e-895a-c45f12e1d9c1", 00:22:16.845 "is_configured": true, 00:22:16.845 "data_offset": 2048, 00:22:16.845 "data_size": 63488 00:22:16.845 } 00:22:16.845 ] 00:22:16.845 }' 00:22:16.845 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:22:16.845 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:16.845 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:16.845 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:16.845 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:16.845 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:16.845 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.845 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:16.845 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.845 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:22:16.845 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:16.845 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.845 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:16.845 [2024-12-06 13:17:03.774449] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:16.845 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.845 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:16.845 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:16.845 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:16.845 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:22:16.845 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:16.845 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:16.845 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:16.845 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:16.845 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:16.845 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:16.845 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:16.845 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:16.845 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.845 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:16.845 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.845 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:16.845 "name": "raid_bdev1", 00:22:16.845 "uuid": "dd7caa23-fd51-44c3-a91f-f8d32739e0db", 00:22:16.845 "strip_size_kb": 64, 00:22:16.845 "state": "online", 00:22:16.845 "raid_level": "raid5f", 00:22:16.845 "superblock": true, 00:22:16.845 "num_base_bdevs": 3, 00:22:16.845 "num_base_bdevs_discovered": 2, 00:22:16.845 "num_base_bdevs_operational": 2, 00:22:16.845 "base_bdevs_list": [ 00:22:16.845 { 00:22:16.845 "name": null, 00:22:16.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.845 "is_configured": false, 00:22:16.845 "data_offset": 0, 00:22:16.845 "data_size": 63488 00:22:16.845 }, 00:22:16.845 { 00:22:16.845 "name": "BaseBdev2", 
00:22:16.845 "uuid": "7ae6d6df-6372-5e15-abca-405c329bd6c6", 00:22:16.845 "is_configured": true, 00:22:16.845 "data_offset": 2048, 00:22:16.845 "data_size": 63488 00:22:16.845 }, 00:22:16.845 { 00:22:16.845 "name": "BaseBdev3", 00:22:16.845 "uuid": "360ed481-a8e9-510e-895a-c45f12e1d9c1", 00:22:16.845 "is_configured": true, 00:22:16.845 "data_offset": 2048, 00:22:16.845 "data_size": 63488 00:22:16.845 } 00:22:16.845 ] 00:22:16.845 }' 00:22:16.845 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:16.845 13:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:17.412 13:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:17.412 13:17:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.412 13:17:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:17.412 [2024-12-06 13:17:04.270659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:17.412 [2024-12-06 13:17:04.270987] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:17.412 [2024-12-06 13:17:04.271025] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:17.412 [2024-12-06 13:17:04.271081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:17.412 [2024-12-06 13:17:04.286670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:22:17.412 13:17:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.412 13:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:22:17.412 [2024-12-06 13:17:04.294515] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:18.358 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:18.358 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:18.358 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:18.358 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:18.358 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:18.358 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.358 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:18.358 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.358 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.358 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.358 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:18.358 "name": "raid_bdev1", 00:22:18.358 "uuid": "dd7caa23-fd51-44c3-a91f-f8d32739e0db", 00:22:18.358 "strip_size_kb": 64, 00:22:18.358 "state": "online", 00:22:18.358 
"raid_level": "raid5f", 00:22:18.358 "superblock": true, 00:22:18.358 "num_base_bdevs": 3, 00:22:18.358 "num_base_bdevs_discovered": 3, 00:22:18.358 "num_base_bdevs_operational": 3, 00:22:18.358 "process": { 00:22:18.358 "type": "rebuild", 00:22:18.358 "target": "spare", 00:22:18.358 "progress": { 00:22:18.358 "blocks": 18432, 00:22:18.358 "percent": 14 00:22:18.358 } 00:22:18.358 }, 00:22:18.358 "base_bdevs_list": [ 00:22:18.358 { 00:22:18.358 "name": "spare", 00:22:18.358 "uuid": "fc5c160a-03b9-5393-8e25-b59b1a8d5b40", 00:22:18.358 "is_configured": true, 00:22:18.358 "data_offset": 2048, 00:22:18.358 "data_size": 63488 00:22:18.358 }, 00:22:18.358 { 00:22:18.358 "name": "BaseBdev2", 00:22:18.358 "uuid": "7ae6d6df-6372-5e15-abca-405c329bd6c6", 00:22:18.358 "is_configured": true, 00:22:18.358 "data_offset": 2048, 00:22:18.358 "data_size": 63488 00:22:18.358 }, 00:22:18.358 { 00:22:18.358 "name": "BaseBdev3", 00:22:18.358 "uuid": "360ed481-a8e9-510e-895a-c45f12e1d9c1", 00:22:18.358 "is_configured": true, 00:22:18.358 "data_offset": 2048, 00:22:18.358 "data_size": 63488 00:22:18.358 } 00:22:18.358 ] 00:22:18.358 }' 00:22:18.358 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:18.617 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:18.617 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:18.617 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:18.617 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:22:18.617 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.617 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.617 [2024-12-06 13:17:05.448691] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:18.617 [2024-12-06 13:17:05.512945] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:18.617 [2024-12-06 13:17:05.513056] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:18.617 [2024-12-06 13:17:05.513083] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:18.617 [2024-12-06 13:17:05.513099] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:18.617 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.617 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:18.617 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:18.617 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:18.618 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:18.618 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:18.618 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:18.618 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:18.618 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:18.618 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:18.618 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:18.618 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:18.618 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.618 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.618 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.618 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.618 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:18.618 "name": "raid_bdev1", 00:22:18.618 "uuid": "dd7caa23-fd51-44c3-a91f-f8d32739e0db", 00:22:18.618 "strip_size_kb": 64, 00:22:18.618 "state": "online", 00:22:18.618 "raid_level": "raid5f", 00:22:18.618 "superblock": true, 00:22:18.618 "num_base_bdevs": 3, 00:22:18.618 "num_base_bdevs_discovered": 2, 00:22:18.618 "num_base_bdevs_operational": 2, 00:22:18.618 "base_bdevs_list": [ 00:22:18.618 { 00:22:18.618 "name": null, 00:22:18.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.618 "is_configured": false, 00:22:18.618 "data_offset": 0, 00:22:18.618 "data_size": 63488 00:22:18.618 }, 00:22:18.618 { 00:22:18.618 "name": "BaseBdev2", 00:22:18.618 "uuid": "7ae6d6df-6372-5e15-abca-405c329bd6c6", 00:22:18.618 "is_configured": true, 00:22:18.618 "data_offset": 2048, 00:22:18.618 "data_size": 63488 00:22:18.618 }, 00:22:18.618 { 00:22:18.618 "name": "BaseBdev3", 00:22:18.618 "uuid": "360ed481-a8e9-510e-895a-c45f12e1d9c1", 00:22:18.618 "is_configured": true, 00:22:18.618 "data_offset": 2048, 00:22:18.618 "data_size": 63488 00:22:18.618 } 00:22:18.618 ] 00:22:18.618 }' 00:22:18.618 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:18.618 13:17:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.184 13:17:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:19.184 13:17:06 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.184 13:17:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.184 [2024-12-06 13:17:06.062608] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:19.184 [2024-12-06 13:17:06.062721] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:19.184 [2024-12-06 13:17:06.062757] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:22:19.184 [2024-12-06 13:17:06.062779] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:19.184 [2024-12-06 13:17:06.063508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:19.184 [2024-12-06 13:17:06.063547] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:19.184 [2024-12-06 13:17:06.063689] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:19.184 [2024-12-06 13:17:06.063718] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:19.184 [2024-12-06 13:17:06.063733] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:19.184 [2024-12-06 13:17:06.063766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:19.184 [2024-12-06 13:17:06.078886] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:22:19.184 spare 00:22:19.184 13:17:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.184 13:17:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:22:19.184 [2024-12-06 13:17:06.086287] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:20.120 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:20.120 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:20.120 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:20.120 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:20.120 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:20.120 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.120 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.120 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.120 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.120 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.379 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:20.379 "name": "raid_bdev1", 00:22:20.379 "uuid": "dd7caa23-fd51-44c3-a91f-f8d32739e0db", 00:22:20.379 "strip_size_kb": 64, 00:22:20.379 "state": 
"online", 00:22:20.379 "raid_level": "raid5f", 00:22:20.379 "superblock": true, 00:22:20.379 "num_base_bdevs": 3, 00:22:20.379 "num_base_bdevs_discovered": 3, 00:22:20.379 "num_base_bdevs_operational": 3, 00:22:20.379 "process": { 00:22:20.379 "type": "rebuild", 00:22:20.379 "target": "spare", 00:22:20.379 "progress": { 00:22:20.379 "blocks": 18432, 00:22:20.379 "percent": 14 00:22:20.379 } 00:22:20.379 }, 00:22:20.379 "base_bdevs_list": [ 00:22:20.379 { 00:22:20.379 "name": "spare", 00:22:20.379 "uuid": "fc5c160a-03b9-5393-8e25-b59b1a8d5b40", 00:22:20.379 "is_configured": true, 00:22:20.379 "data_offset": 2048, 00:22:20.379 "data_size": 63488 00:22:20.379 }, 00:22:20.379 { 00:22:20.379 "name": "BaseBdev2", 00:22:20.379 "uuid": "7ae6d6df-6372-5e15-abca-405c329bd6c6", 00:22:20.379 "is_configured": true, 00:22:20.379 "data_offset": 2048, 00:22:20.379 "data_size": 63488 00:22:20.379 }, 00:22:20.379 { 00:22:20.379 "name": "BaseBdev3", 00:22:20.379 "uuid": "360ed481-a8e9-510e-895a-c45f12e1d9c1", 00:22:20.379 "is_configured": true, 00:22:20.380 "data_offset": 2048, 00:22:20.380 "data_size": 63488 00:22:20.380 } 00:22:20.380 ] 00:22:20.380 }' 00:22:20.380 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:20.380 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:20.380 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:20.380 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:20.380 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:22:20.380 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.380 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.380 [2024-12-06 13:17:07.247924] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:20.380 [2024-12-06 13:17:07.304283] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:20.380 [2024-12-06 13:17:07.304415] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:20.380 [2024-12-06 13:17:07.304446] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:20.380 [2024-12-06 13:17:07.304459] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:20.380 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.380 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:20.380 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:20.380 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:20.380 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:20.380 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:20.380 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:20.380 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:20.380 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:20.380 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:20.380 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:20.380 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.380 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.380 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.380 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.380 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.638 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:20.638 "name": "raid_bdev1", 00:22:20.638 "uuid": "dd7caa23-fd51-44c3-a91f-f8d32739e0db", 00:22:20.638 "strip_size_kb": 64, 00:22:20.638 "state": "online", 00:22:20.638 "raid_level": "raid5f", 00:22:20.638 "superblock": true, 00:22:20.638 "num_base_bdevs": 3, 00:22:20.638 "num_base_bdevs_discovered": 2, 00:22:20.638 "num_base_bdevs_operational": 2, 00:22:20.638 "base_bdevs_list": [ 00:22:20.638 { 00:22:20.638 "name": null, 00:22:20.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.638 "is_configured": false, 00:22:20.638 "data_offset": 0, 00:22:20.638 "data_size": 63488 00:22:20.638 }, 00:22:20.638 { 00:22:20.638 "name": "BaseBdev2", 00:22:20.638 "uuid": "7ae6d6df-6372-5e15-abca-405c329bd6c6", 00:22:20.638 "is_configured": true, 00:22:20.638 "data_offset": 2048, 00:22:20.638 "data_size": 63488 00:22:20.638 }, 00:22:20.638 { 00:22:20.638 "name": "BaseBdev3", 00:22:20.638 "uuid": "360ed481-a8e9-510e-895a-c45f12e1d9c1", 00:22:20.638 "is_configured": true, 00:22:20.638 "data_offset": 2048, 00:22:20.638 "data_size": 63488 00:22:20.638 } 00:22:20.638 ] 00:22:20.638 }' 00:22:20.638 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:20.638 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.898 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:20.898 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:22:20.898 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:20.898 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:20.898 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:20.898 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.898 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.898 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.898 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.898 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.157 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:21.157 "name": "raid_bdev1", 00:22:21.157 "uuid": "dd7caa23-fd51-44c3-a91f-f8d32739e0db", 00:22:21.157 "strip_size_kb": 64, 00:22:21.157 "state": "online", 00:22:21.157 "raid_level": "raid5f", 00:22:21.157 "superblock": true, 00:22:21.157 "num_base_bdevs": 3, 00:22:21.157 "num_base_bdevs_discovered": 2, 00:22:21.157 "num_base_bdevs_operational": 2, 00:22:21.157 "base_bdevs_list": [ 00:22:21.157 { 00:22:21.157 "name": null, 00:22:21.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:21.157 "is_configured": false, 00:22:21.157 "data_offset": 0, 00:22:21.157 "data_size": 63488 00:22:21.157 }, 00:22:21.157 { 00:22:21.157 "name": "BaseBdev2", 00:22:21.157 "uuid": "7ae6d6df-6372-5e15-abca-405c329bd6c6", 00:22:21.157 "is_configured": true, 00:22:21.157 "data_offset": 2048, 00:22:21.157 "data_size": 63488 00:22:21.157 }, 00:22:21.157 { 00:22:21.157 "name": "BaseBdev3", 00:22:21.157 "uuid": "360ed481-a8e9-510e-895a-c45f12e1d9c1", 00:22:21.157 "is_configured": true, 
00:22:21.157 "data_offset": 2048, 00:22:21.157 "data_size": 63488 00:22:21.157 } 00:22:21.157 ] 00:22:21.157 }' 00:22:21.157 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:21.157 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:21.157 13:17:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:21.157 13:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:21.157 13:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:22:21.158 13:17:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.158 13:17:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:21.158 13:17:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.158 13:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:21.158 13:17:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.158 13:17:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:21.158 [2024-12-06 13:17:08.061937] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:21.158 [2024-12-06 13:17:08.062036] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:21.158 [2024-12-06 13:17:08.062079] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:22:21.158 [2024-12-06 13:17:08.062095] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:21.158 [2024-12-06 13:17:08.062755] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:21.158 [2024-12-06 
13:17:08.062788] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:21.158 [2024-12-06 13:17:08.062921] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:21.158 [2024-12-06 13:17:08.062945] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:21.158 [2024-12-06 13:17:08.062972] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:21.158 [2024-12-06 13:17:08.062986] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:22:21.158 BaseBdev1 00:22:21.158 13:17:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.158 13:17:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:22:22.174 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:22.174 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:22.174 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:22.174 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:22.174 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:22.174 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:22.174 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:22.174 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:22.174 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:22.174 13:17:09 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:22.174 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:22.174 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.174 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:22.174 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.174 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.174 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:22.174 "name": "raid_bdev1", 00:22:22.174 "uuid": "dd7caa23-fd51-44c3-a91f-f8d32739e0db", 00:22:22.174 "strip_size_kb": 64, 00:22:22.174 "state": "online", 00:22:22.174 "raid_level": "raid5f", 00:22:22.175 "superblock": true, 00:22:22.175 "num_base_bdevs": 3, 00:22:22.175 "num_base_bdevs_discovered": 2, 00:22:22.175 "num_base_bdevs_operational": 2, 00:22:22.175 "base_bdevs_list": [ 00:22:22.175 { 00:22:22.175 "name": null, 00:22:22.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:22.175 "is_configured": false, 00:22:22.175 "data_offset": 0, 00:22:22.175 "data_size": 63488 00:22:22.175 }, 00:22:22.175 { 00:22:22.175 "name": "BaseBdev2", 00:22:22.175 "uuid": "7ae6d6df-6372-5e15-abca-405c329bd6c6", 00:22:22.175 "is_configured": true, 00:22:22.175 "data_offset": 2048, 00:22:22.175 "data_size": 63488 00:22:22.175 }, 00:22:22.175 { 00:22:22.175 "name": "BaseBdev3", 00:22:22.175 "uuid": "360ed481-a8e9-510e-895a-c45f12e1d9c1", 00:22:22.175 "is_configured": true, 00:22:22.175 "data_offset": 2048, 00:22:22.175 "data_size": 63488 00:22:22.175 } 00:22:22.175 ] 00:22:22.175 }' 00:22:22.175 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:22.175 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:22:22.745 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:22.745 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:22.745 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:22.745 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:22.745 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:22.745 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:22.745 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.745 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.745 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:22.745 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.745 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:22.745 "name": "raid_bdev1", 00:22:22.745 "uuid": "dd7caa23-fd51-44c3-a91f-f8d32739e0db", 00:22:22.745 "strip_size_kb": 64, 00:22:22.745 "state": "online", 00:22:22.745 "raid_level": "raid5f", 00:22:22.745 "superblock": true, 00:22:22.745 "num_base_bdevs": 3, 00:22:22.745 "num_base_bdevs_discovered": 2, 00:22:22.745 "num_base_bdevs_operational": 2, 00:22:22.745 "base_bdevs_list": [ 00:22:22.745 { 00:22:22.745 "name": null, 00:22:22.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:22.745 "is_configured": false, 00:22:22.745 "data_offset": 0, 00:22:22.745 "data_size": 63488 00:22:22.745 }, 00:22:22.745 { 00:22:22.745 "name": "BaseBdev2", 00:22:22.745 "uuid": "7ae6d6df-6372-5e15-abca-405c329bd6c6", 
00:22:22.745 "is_configured": true, 00:22:22.745 "data_offset": 2048, 00:22:22.745 "data_size": 63488 00:22:22.745 }, 00:22:22.745 { 00:22:22.745 "name": "BaseBdev3", 00:22:22.745 "uuid": "360ed481-a8e9-510e-895a-c45f12e1d9c1", 00:22:22.745 "is_configured": true, 00:22:22.745 "data_offset": 2048, 00:22:22.745 "data_size": 63488 00:22:22.745 } 00:22:22.745 ] 00:22:22.746 }' 00:22:22.746 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:22.746 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:22.746 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:22.746 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:22.746 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:22.746 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:22:22.746 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:22.746 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:22.746 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:22.746 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:22.746 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:22.746 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:22.746 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.746 13:17:09 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:22.746 [2024-12-06 13:17:09.722529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:22.746 [2024-12-06 13:17:09.722794] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:22.746 [2024-12-06 13:17:09.722819] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:22.746 request: 00:22:22.746 { 00:22:22.746 "base_bdev": "BaseBdev1", 00:22:22.746 "raid_bdev": "raid_bdev1", 00:22:22.746 "method": "bdev_raid_add_base_bdev", 00:22:22.746 "req_id": 1 00:22:22.746 } 00:22:22.746 Got JSON-RPC error response 00:22:22.746 response: 00:22:22.746 { 00:22:22.746 "code": -22, 00:22:22.746 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:22:22.746 } 00:22:22.746 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:22.746 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:22:22.746 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:22.746 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:22.746 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:22.746 13:17:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:22:24.121 13:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:24.121 13:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:24.121 13:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:24.121 13:17:10 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:24.121 13:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:24.121 13:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:24.121 13:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:24.121 13:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:24.122 13:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:24.122 13:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:24.122 13:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:24.122 13:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.122 13:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.122 13:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:24.122 13:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.122 13:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:24.122 "name": "raid_bdev1", 00:22:24.122 "uuid": "dd7caa23-fd51-44c3-a91f-f8d32739e0db", 00:22:24.122 "strip_size_kb": 64, 00:22:24.122 "state": "online", 00:22:24.122 "raid_level": "raid5f", 00:22:24.122 "superblock": true, 00:22:24.122 "num_base_bdevs": 3, 00:22:24.122 "num_base_bdevs_discovered": 2, 00:22:24.122 "num_base_bdevs_operational": 2, 00:22:24.122 "base_bdevs_list": [ 00:22:24.122 { 00:22:24.122 "name": null, 00:22:24.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.122 "is_configured": false, 00:22:24.122 "data_offset": 0, 00:22:24.122 "data_size": 63488 00:22:24.122 }, 00:22:24.122 { 00:22:24.122 
"name": "BaseBdev2", 00:22:24.122 "uuid": "7ae6d6df-6372-5e15-abca-405c329bd6c6", 00:22:24.122 "is_configured": true, 00:22:24.122 "data_offset": 2048, 00:22:24.122 "data_size": 63488 00:22:24.122 }, 00:22:24.122 { 00:22:24.122 "name": "BaseBdev3", 00:22:24.122 "uuid": "360ed481-a8e9-510e-895a-c45f12e1d9c1", 00:22:24.122 "is_configured": true, 00:22:24.122 "data_offset": 2048, 00:22:24.122 "data_size": 63488 00:22:24.122 } 00:22:24.122 ] 00:22:24.122 }' 00:22:24.122 13:17:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:24.122 13:17:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:24.381 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:24.381 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:24.381 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:24.381 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:24.381 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:24.381 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:24.381 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.381 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:24.381 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.381 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.381 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:24.381 "name": "raid_bdev1", 00:22:24.381 "uuid": "dd7caa23-fd51-44c3-a91f-f8d32739e0db", 00:22:24.381 
"strip_size_kb": 64, 00:22:24.381 "state": "online", 00:22:24.381 "raid_level": "raid5f", 00:22:24.381 "superblock": true, 00:22:24.381 "num_base_bdevs": 3, 00:22:24.381 "num_base_bdevs_discovered": 2, 00:22:24.381 "num_base_bdevs_operational": 2, 00:22:24.381 "base_bdevs_list": [ 00:22:24.381 { 00:22:24.381 "name": null, 00:22:24.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.381 "is_configured": false, 00:22:24.381 "data_offset": 0, 00:22:24.381 "data_size": 63488 00:22:24.381 }, 00:22:24.381 { 00:22:24.381 "name": "BaseBdev2", 00:22:24.381 "uuid": "7ae6d6df-6372-5e15-abca-405c329bd6c6", 00:22:24.381 "is_configured": true, 00:22:24.381 "data_offset": 2048, 00:22:24.381 "data_size": 63488 00:22:24.381 }, 00:22:24.381 { 00:22:24.381 "name": "BaseBdev3", 00:22:24.381 "uuid": "360ed481-a8e9-510e-895a-c45f12e1d9c1", 00:22:24.381 "is_configured": true, 00:22:24.381 "data_offset": 2048, 00:22:24.381 "data_size": 63488 00:22:24.381 } 00:22:24.381 ] 00:22:24.381 }' 00:22:24.381 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:24.640 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:24.640 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:24.640 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:24.640 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82749 00:22:24.640 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82749 ']' 00:22:24.640 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82749 00:22:24.640 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:22:24.640 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:24.640 13:17:11 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82749 00:22:24.640 killing process with pid 82749 00:22:24.640 Received shutdown signal, test time was about 60.000000 seconds 00:22:24.640 00:22:24.640 Latency(us) 00:22:24.640 [2024-12-06T13:17:11.656Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:24.640 [2024-12-06T13:17:11.656Z] =================================================================================================================== 00:22:24.640 [2024-12-06T13:17:11.656Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:24.640 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:24.640 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:24.640 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82749' 00:22:24.640 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82749 00:22:24.640 13:17:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82749 00:22:24.640 [2024-12-06 13:17:11.492260] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:24.640 [2024-12-06 13:17:11.492439] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:24.640 [2024-12-06 13:17:11.492551] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:24.640 [2024-12-06 13:17:11.492584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:22:24.899 [2024-12-06 13:17:11.871562] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:26.276 13:17:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:22:26.276 00:22:26.276 real 0m25.254s 00:22:26.276 user 0m33.571s 
00:22:26.276 sys 0m2.773s 00:22:26.276 ************************************ 00:22:26.276 END TEST raid5f_rebuild_test_sb 00:22:26.276 ************************************ 00:22:26.276 13:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:26.276 13:17:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.276 13:17:13 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:22:26.276 13:17:13 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:22:26.276 13:17:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:26.276 13:17:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:26.276 13:17:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:26.276 ************************************ 00:22:26.276 START TEST raid5f_state_function_test 00:22:26.276 ************************************ 00:22:26.276 13:17:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:22:26.276 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:22:26.276 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:22:26.276 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:22:26.276 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:26.276 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:26.277 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:26.277 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:26.277 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:22:26.277 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:26.277 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:26.277 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:26.277 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:26.277 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:22:26.277 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:26.277 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:26.277 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:22:26.277 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:26.277 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:26.277 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:26.277 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:26.277 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:26.277 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:26.277 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:26.277 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:26.277 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:22:26.277 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:22:26.277 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:22:26.277 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:22:26.277 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:22:26.277 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83514 00:22:26.277 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:26.277 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83514' 00:22:26.277 Process raid pid: 83514 00:22:26.277 13:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83514 00:22:26.277 13:17:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83514 ']' 00:22:26.277 13:17:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.277 13:17:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:26.277 13:17:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.277 13:17:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:26.277 13:17:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.277 [2024-12-06 13:17:13.161224] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:22:26.277 [2024-12-06 13:17:13.161743] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:26.550 [2024-12-06 13:17:13.340837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.550 [2024-12-06 13:17:13.487621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:26.845 [2024-12-06 13:17:13.716129] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:26.845 [2024-12-06 13:17:13.716200] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:27.409 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:27.409 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:22:27.409 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:27.409 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.409 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.409 [2024-12-06 13:17:14.154242] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:27.409 [2024-12-06 13:17:14.154333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:27.409 [2024-12-06 13:17:14.154352] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:27.409 [2024-12-06 13:17:14.154370] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:27.409 [2024-12-06 13:17:14.154381] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:22:27.409 [2024-12-06 13:17:14.154397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:27.409 [2024-12-06 13:17:14.154408] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:27.409 [2024-12-06 13:17:14.154422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:27.409 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.409 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:27.409 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:27.409 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:27.409 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:27.409 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:27.409 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:27.409 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:27.409 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:27.409 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:27.409 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:27.409 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:27.409 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.409 13:17:14 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:27.409 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:27.409 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.409 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:27.409 "name": "Existed_Raid", 00:22:27.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.409 "strip_size_kb": 64, 00:22:27.409 "state": "configuring", 00:22:27.409 "raid_level": "raid5f", 00:22:27.409 "superblock": false, 00:22:27.409 "num_base_bdevs": 4, 00:22:27.409 "num_base_bdevs_discovered": 0, 00:22:27.409 "num_base_bdevs_operational": 4, 00:22:27.409 "base_bdevs_list": [ 00:22:27.409 { 00:22:27.409 "name": "BaseBdev1", 00:22:27.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.409 "is_configured": false, 00:22:27.409 "data_offset": 0, 00:22:27.409 "data_size": 0 00:22:27.409 }, 00:22:27.409 { 00:22:27.409 "name": "BaseBdev2", 00:22:27.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.409 "is_configured": false, 00:22:27.409 "data_offset": 0, 00:22:27.409 "data_size": 0 00:22:27.409 }, 00:22:27.409 { 00:22:27.409 "name": "BaseBdev3", 00:22:27.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.409 "is_configured": false, 00:22:27.409 "data_offset": 0, 00:22:27.409 "data_size": 0 00:22:27.409 }, 00:22:27.409 { 00:22:27.410 "name": "BaseBdev4", 00:22:27.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.410 "is_configured": false, 00:22:27.410 "data_offset": 0, 00:22:27.410 "data_size": 0 00:22:27.410 } 00:22:27.410 ] 00:22:27.410 }' 00:22:27.410 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:27.410 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.996 13:17:14 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:27.996 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.996 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.996 [2024-12-06 13:17:14.686340] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:27.996 [2024-12-06 13:17:14.686403] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:27.996 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.996 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:27.996 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.996 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.996 [2024-12-06 13:17:14.698353] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:27.996 [2024-12-06 13:17:14.698428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:27.996 [2024-12-06 13:17:14.698446] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:27.996 [2024-12-06 13:17:14.698463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:27.996 [2024-12-06 13:17:14.698501] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:27.996 [2024-12-06 13:17:14.698519] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:27.996 [2024-12-06 13:17:14.698530] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:22:27.996 [2024-12-06 13:17:14.698544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:27.996 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.996 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:27.996 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.996 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.996 [2024-12-06 13:17:14.747019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:27.996 BaseBdev1 00:22:27.996 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.996 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:27.996 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:27.996 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:27.996 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:27.996 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:27.996 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:27.996 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:27.996 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.996 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.996 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.996 
13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:27.996 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.996 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.996 [ 00:22:27.996 { 00:22:27.996 "name": "BaseBdev1", 00:22:27.996 "aliases": [ 00:22:27.996 "ccac4750-6297-43f5-994e-651a3067bf89" 00:22:27.996 ], 00:22:27.996 "product_name": "Malloc disk", 00:22:27.996 "block_size": 512, 00:22:27.996 "num_blocks": 65536, 00:22:27.996 "uuid": "ccac4750-6297-43f5-994e-651a3067bf89", 00:22:27.996 "assigned_rate_limits": { 00:22:27.996 "rw_ios_per_sec": 0, 00:22:27.996 "rw_mbytes_per_sec": 0, 00:22:27.996 "r_mbytes_per_sec": 0, 00:22:27.996 "w_mbytes_per_sec": 0 00:22:27.996 }, 00:22:27.996 "claimed": true, 00:22:27.996 "claim_type": "exclusive_write", 00:22:27.996 "zoned": false, 00:22:27.996 "supported_io_types": { 00:22:27.996 "read": true, 00:22:27.996 "write": true, 00:22:27.996 "unmap": true, 00:22:27.996 "flush": true, 00:22:27.996 "reset": true, 00:22:27.996 "nvme_admin": false, 00:22:27.996 "nvme_io": false, 00:22:27.996 "nvme_io_md": false, 00:22:27.996 "write_zeroes": true, 00:22:27.996 "zcopy": true, 00:22:27.996 "get_zone_info": false, 00:22:27.996 "zone_management": false, 00:22:27.996 "zone_append": false, 00:22:27.996 "compare": false, 00:22:27.996 "compare_and_write": false, 00:22:27.996 "abort": true, 00:22:27.996 "seek_hole": false, 00:22:27.996 "seek_data": false, 00:22:27.996 "copy": true, 00:22:27.997 "nvme_iov_md": false 00:22:27.997 }, 00:22:27.997 "memory_domains": [ 00:22:27.997 { 00:22:27.997 "dma_device_id": "system", 00:22:27.997 "dma_device_type": 1 00:22:27.997 }, 00:22:27.997 { 00:22:27.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:27.997 "dma_device_type": 2 00:22:27.997 } 00:22:27.997 ], 00:22:27.997 "driver_specific": {} 00:22:27.997 } 
00:22:27.997 ] 00:22:27.997 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.997 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:27.997 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:27.997 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:27.997 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:27.997 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:27.997 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:27.997 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:27.997 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:27.997 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:27.997 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:27.997 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:27.997 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:27.997 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.997 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:27.997 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.997 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:27.997 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:27.997 "name": "Existed_Raid", 00:22:27.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.997 "strip_size_kb": 64, 00:22:27.997 "state": "configuring", 00:22:27.997 "raid_level": "raid5f", 00:22:27.997 "superblock": false, 00:22:27.997 "num_base_bdevs": 4, 00:22:27.997 "num_base_bdevs_discovered": 1, 00:22:27.997 "num_base_bdevs_operational": 4, 00:22:27.997 "base_bdevs_list": [ 00:22:27.997 { 00:22:27.997 "name": "BaseBdev1", 00:22:27.997 "uuid": "ccac4750-6297-43f5-994e-651a3067bf89", 00:22:27.997 "is_configured": true, 00:22:27.997 "data_offset": 0, 00:22:27.997 "data_size": 65536 00:22:27.997 }, 00:22:27.997 { 00:22:27.997 "name": "BaseBdev2", 00:22:27.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.997 "is_configured": false, 00:22:27.997 "data_offset": 0, 00:22:27.997 "data_size": 0 00:22:27.997 }, 00:22:27.997 { 00:22:27.997 "name": "BaseBdev3", 00:22:27.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.997 "is_configured": false, 00:22:27.997 "data_offset": 0, 00:22:27.997 "data_size": 0 00:22:27.997 }, 00:22:27.997 { 00:22:27.997 "name": "BaseBdev4", 00:22:27.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.997 "is_configured": false, 00:22:27.997 "data_offset": 0, 00:22:27.997 "data_size": 0 00:22:27.997 } 00:22:27.997 ] 00:22:27.997 }' 00:22:27.997 13:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:27.997 13:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.571 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:28.571 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.571 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.571 
[2024-12-06 13:17:15.291271] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:28.571 [2024-12-06 13:17:15.291355] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:28.571 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.571 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:28.571 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.571 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.571 [2024-12-06 13:17:15.299327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:28.571 [2024-12-06 13:17:15.302244] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:28.571 [2024-12-06 13:17:15.302453] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:28.571 [2024-12-06 13:17:15.302655] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:28.571 [2024-12-06 13:17:15.302832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:28.571 [2024-12-06 13:17:15.302988] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:28.571 [2024-12-06 13:17:15.303166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:28.571 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.571 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:28.571 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:22:28.571 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:28.571 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:28.571 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:28.571 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:28.571 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:28.571 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:28.571 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:28.571 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:28.571 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:28.571 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:28.571 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:28.571 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.571 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.571 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:28.571 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.571 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:28.571 "name": "Existed_Raid", 00:22:28.571 "uuid": "00000000-0000-0000-0000-000000000000", 
00:22:28.571 "strip_size_kb": 64, 00:22:28.571 "state": "configuring", 00:22:28.571 "raid_level": "raid5f", 00:22:28.571 "superblock": false, 00:22:28.571 "num_base_bdevs": 4, 00:22:28.571 "num_base_bdevs_discovered": 1, 00:22:28.571 "num_base_bdevs_operational": 4, 00:22:28.571 "base_bdevs_list": [ 00:22:28.571 { 00:22:28.571 "name": "BaseBdev1", 00:22:28.571 "uuid": "ccac4750-6297-43f5-994e-651a3067bf89", 00:22:28.571 "is_configured": true, 00:22:28.571 "data_offset": 0, 00:22:28.571 "data_size": 65536 00:22:28.571 }, 00:22:28.571 { 00:22:28.571 "name": "BaseBdev2", 00:22:28.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.571 "is_configured": false, 00:22:28.571 "data_offset": 0, 00:22:28.571 "data_size": 0 00:22:28.571 }, 00:22:28.571 { 00:22:28.571 "name": "BaseBdev3", 00:22:28.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.571 "is_configured": false, 00:22:28.571 "data_offset": 0, 00:22:28.571 "data_size": 0 00:22:28.571 }, 00:22:28.571 { 00:22:28.571 "name": "BaseBdev4", 00:22:28.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.571 "is_configured": false, 00:22:28.571 "data_offset": 0, 00:22:28.571 "data_size": 0 00:22:28.571 } 00:22:28.571 ] 00:22:28.571 }' 00:22:28.571 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:28.571 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.137 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:29.137 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.137 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.137 [2024-12-06 13:17:15.894431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:29.137 BaseBdev2 00:22:29.137 13:17:15 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.137 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:29.137 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:29.137 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:29.137 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:29.137 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:29.137 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:29.137 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:29.137 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.137 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.137 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.137 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:29.137 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.137 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.137 [ 00:22:29.137 { 00:22:29.137 "name": "BaseBdev2", 00:22:29.137 "aliases": [ 00:22:29.137 "eb7a0af7-417c-4383-a046-8d3579c20dd5" 00:22:29.137 ], 00:22:29.137 "product_name": "Malloc disk", 00:22:29.137 "block_size": 512, 00:22:29.137 "num_blocks": 65536, 00:22:29.137 "uuid": "eb7a0af7-417c-4383-a046-8d3579c20dd5", 00:22:29.137 "assigned_rate_limits": { 00:22:29.137 "rw_ios_per_sec": 0, 00:22:29.137 "rw_mbytes_per_sec": 0, 00:22:29.137 
"r_mbytes_per_sec": 0, 00:22:29.137 "w_mbytes_per_sec": 0 00:22:29.137 }, 00:22:29.137 "claimed": true, 00:22:29.137 "claim_type": "exclusive_write", 00:22:29.137 "zoned": false, 00:22:29.137 "supported_io_types": { 00:22:29.137 "read": true, 00:22:29.137 "write": true, 00:22:29.137 "unmap": true, 00:22:29.137 "flush": true, 00:22:29.137 "reset": true, 00:22:29.137 "nvme_admin": false, 00:22:29.137 "nvme_io": false, 00:22:29.137 "nvme_io_md": false, 00:22:29.137 "write_zeroes": true, 00:22:29.137 "zcopy": true, 00:22:29.137 "get_zone_info": false, 00:22:29.137 "zone_management": false, 00:22:29.137 "zone_append": false, 00:22:29.137 "compare": false, 00:22:29.137 "compare_and_write": false, 00:22:29.137 "abort": true, 00:22:29.137 "seek_hole": false, 00:22:29.137 "seek_data": false, 00:22:29.137 "copy": true, 00:22:29.137 "nvme_iov_md": false 00:22:29.137 }, 00:22:29.137 "memory_domains": [ 00:22:29.137 { 00:22:29.137 "dma_device_id": "system", 00:22:29.137 "dma_device_type": 1 00:22:29.137 }, 00:22:29.137 { 00:22:29.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:29.137 "dma_device_type": 2 00:22:29.137 } 00:22:29.137 ], 00:22:29.137 "driver_specific": {} 00:22:29.137 } 00:22:29.137 ] 00:22:29.137 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.137 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:29.137 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:29.137 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:29.137 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:29.137 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:29.137 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:22:29.137 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:29.137 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:29.137 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:29.137 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:29.137 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:29.137 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:29.137 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:29.137 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:29.137 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:29.137 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.137 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.137 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.137 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:29.137 "name": "Existed_Raid", 00:22:29.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.137 "strip_size_kb": 64, 00:22:29.137 "state": "configuring", 00:22:29.137 "raid_level": "raid5f", 00:22:29.137 "superblock": false, 00:22:29.137 "num_base_bdevs": 4, 00:22:29.137 "num_base_bdevs_discovered": 2, 00:22:29.137 "num_base_bdevs_operational": 4, 00:22:29.137 "base_bdevs_list": [ 00:22:29.137 { 00:22:29.137 "name": "BaseBdev1", 00:22:29.137 "uuid": 
"ccac4750-6297-43f5-994e-651a3067bf89", 00:22:29.137 "is_configured": true, 00:22:29.137 "data_offset": 0, 00:22:29.137 "data_size": 65536 00:22:29.137 }, 00:22:29.137 { 00:22:29.137 "name": "BaseBdev2", 00:22:29.137 "uuid": "eb7a0af7-417c-4383-a046-8d3579c20dd5", 00:22:29.138 "is_configured": true, 00:22:29.138 "data_offset": 0, 00:22:29.138 "data_size": 65536 00:22:29.138 }, 00:22:29.138 { 00:22:29.138 "name": "BaseBdev3", 00:22:29.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.138 "is_configured": false, 00:22:29.138 "data_offset": 0, 00:22:29.138 "data_size": 0 00:22:29.138 }, 00:22:29.138 { 00:22:29.138 "name": "BaseBdev4", 00:22:29.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.138 "is_configured": false, 00:22:29.138 "data_offset": 0, 00:22:29.138 "data_size": 0 00:22:29.138 } 00:22:29.138 ] 00:22:29.138 }' 00:22:29.138 13:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:29.138 13:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.705 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:29.705 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.706 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.706 [2024-12-06 13:17:16.524892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:29.706 BaseBdev3 00:22:29.706 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.706 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:22:29.706 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:22:29.706 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:22:29.706 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:29.706 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:29.706 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:29.706 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:29.706 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.706 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.706 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.706 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:29.706 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.706 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.706 [ 00:22:29.706 { 00:22:29.706 "name": "BaseBdev3", 00:22:29.706 "aliases": [ 00:22:29.706 "98cda52c-c52f-4166-99db-b06c0d2576e4" 00:22:29.706 ], 00:22:29.706 "product_name": "Malloc disk", 00:22:29.706 "block_size": 512, 00:22:29.706 "num_blocks": 65536, 00:22:29.706 "uuid": "98cda52c-c52f-4166-99db-b06c0d2576e4", 00:22:29.706 "assigned_rate_limits": { 00:22:29.706 "rw_ios_per_sec": 0, 00:22:29.706 "rw_mbytes_per_sec": 0, 00:22:29.706 "r_mbytes_per_sec": 0, 00:22:29.706 "w_mbytes_per_sec": 0 00:22:29.706 }, 00:22:29.706 "claimed": true, 00:22:29.706 "claim_type": "exclusive_write", 00:22:29.706 "zoned": false, 00:22:29.706 "supported_io_types": { 00:22:29.706 "read": true, 00:22:29.706 "write": true, 00:22:29.706 "unmap": true, 00:22:29.706 "flush": true, 00:22:29.706 "reset": true, 00:22:29.706 "nvme_admin": false, 
00:22:29.706 "nvme_io": false, 00:22:29.706 "nvme_io_md": false, 00:22:29.706 "write_zeroes": true, 00:22:29.706 "zcopy": true, 00:22:29.706 "get_zone_info": false, 00:22:29.706 "zone_management": false, 00:22:29.706 "zone_append": false, 00:22:29.706 "compare": false, 00:22:29.706 "compare_and_write": false, 00:22:29.706 "abort": true, 00:22:29.706 "seek_hole": false, 00:22:29.706 "seek_data": false, 00:22:29.706 "copy": true, 00:22:29.706 "nvme_iov_md": false 00:22:29.706 }, 00:22:29.706 "memory_domains": [ 00:22:29.706 { 00:22:29.706 "dma_device_id": "system", 00:22:29.706 "dma_device_type": 1 00:22:29.706 }, 00:22:29.706 { 00:22:29.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:29.706 "dma_device_type": 2 00:22:29.706 } 00:22:29.706 ], 00:22:29.706 "driver_specific": {} 00:22:29.706 } 00:22:29.706 ] 00:22:29.706 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.706 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:29.706 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:29.706 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:29.706 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:29.706 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:29.706 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:29.706 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:29.706 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:29.706 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:22:29.706 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:29.706 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:29.706 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:29.706 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:29.706 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:29.706 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:29.706 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.706 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.706 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.706 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:29.706 "name": "Existed_Raid", 00:22:29.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.706 "strip_size_kb": 64, 00:22:29.706 "state": "configuring", 00:22:29.706 "raid_level": "raid5f", 00:22:29.706 "superblock": false, 00:22:29.706 "num_base_bdevs": 4, 00:22:29.706 "num_base_bdevs_discovered": 3, 00:22:29.706 "num_base_bdevs_operational": 4, 00:22:29.706 "base_bdevs_list": [ 00:22:29.706 { 00:22:29.706 "name": "BaseBdev1", 00:22:29.706 "uuid": "ccac4750-6297-43f5-994e-651a3067bf89", 00:22:29.706 "is_configured": true, 00:22:29.706 "data_offset": 0, 00:22:29.706 "data_size": 65536 00:22:29.706 }, 00:22:29.706 { 00:22:29.706 "name": "BaseBdev2", 00:22:29.706 "uuid": "eb7a0af7-417c-4383-a046-8d3579c20dd5", 00:22:29.706 "is_configured": true, 00:22:29.706 "data_offset": 0, 00:22:29.706 "data_size": 65536 00:22:29.706 }, 00:22:29.706 { 
00:22:29.706 "name": "BaseBdev3", 00:22:29.706 "uuid": "98cda52c-c52f-4166-99db-b06c0d2576e4", 00:22:29.706 "is_configured": true, 00:22:29.706 "data_offset": 0, 00:22:29.706 "data_size": 65536 00:22:29.706 }, 00:22:29.706 { 00:22:29.706 "name": "BaseBdev4", 00:22:29.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.706 "is_configured": false, 00:22:29.706 "data_offset": 0, 00:22:29.706 "data_size": 0 00:22:29.706 } 00:22:29.706 ] 00:22:29.706 }' 00:22:29.706 13:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:29.706 13:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.274 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:22:30.274 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.274 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.274 [2024-12-06 13:17:17.155390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:30.274 [2024-12-06 13:17:17.155824] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:30.274 [2024-12-06 13:17:17.155858] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:22:30.274 [2024-12-06 13:17:17.156306] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:30.274 [2024-12-06 13:17:17.163209] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:30.274 [2024-12-06 13:17:17.163403] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:30.274 [2024-12-06 13:17:17.164062] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:30.274 BaseBdev4 00:22:30.274 13:17:17 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.274 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:22:30.274 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:22:30.274 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:30.274 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:30.274 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:30.274 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:30.274 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:30.274 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.274 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.274 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.274 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:30.274 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.274 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.274 [ 00:22:30.274 { 00:22:30.274 "name": "BaseBdev4", 00:22:30.274 "aliases": [ 00:22:30.274 "634a69ad-8898-49fc-845c-5380a4ef688a" 00:22:30.274 ], 00:22:30.274 "product_name": "Malloc disk", 00:22:30.274 "block_size": 512, 00:22:30.274 "num_blocks": 65536, 00:22:30.274 "uuid": "634a69ad-8898-49fc-845c-5380a4ef688a", 00:22:30.274 "assigned_rate_limits": { 00:22:30.274 "rw_ios_per_sec": 0, 00:22:30.274 
"rw_mbytes_per_sec": 0, 00:22:30.274 "r_mbytes_per_sec": 0, 00:22:30.274 "w_mbytes_per_sec": 0 00:22:30.274 }, 00:22:30.274 "claimed": true, 00:22:30.274 "claim_type": "exclusive_write", 00:22:30.274 "zoned": false, 00:22:30.274 "supported_io_types": { 00:22:30.274 "read": true, 00:22:30.274 "write": true, 00:22:30.274 "unmap": true, 00:22:30.274 "flush": true, 00:22:30.274 "reset": true, 00:22:30.274 "nvme_admin": false, 00:22:30.274 "nvme_io": false, 00:22:30.274 "nvme_io_md": false, 00:22:30.274 "write_zeroes": true, 00:22:30.274 "zcopy": true, 00:22:30.274 "get_zone_info": false, 00:22:30.274 "zone_management": false, 00:22:30.274 "zone_append": false, 00:22:30.274 "compare": false, 00:22:30.274 "compare_and_write": false, 00:22:30.274 "abort": true, 00:22:30.274 "seek_hole": false, 00:22:30.274 "seek_data": false, 00:22:30.274 "copy": true, 00:22:30.274 "nvme_iov_md": false 00:22:30.274 }, 00:22:30.274 "memory_domains": [ 00:22:30.274 { 00:22:30.274 "dma_device_id": "system", 00:22:30.274 "dma_device_type": 1 00:22:30.274 }, 00:22:30.274 { 00:22:30.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:30.274 "dma_device_type": 2 00:22:30.274 } 00:22:30.274 ], 00:22:30.274 "driver_specific": {} 00:22:30.274 } 00:22:30.274 ] 00:22:30.274 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.274 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:30.274 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:30.274 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:30.274 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:22:30.274 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:30.274 13:17:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:30.274 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:30.274 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:30.274 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:30.274 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:30.274 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:30.274 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:30.274 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:30.274 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.274 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.274 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:30.274 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.274 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.274 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:30.274 "name": "Existed_Raid", 00:22:30.274 "uuid": "2cb22e8f-1ab8-4b47-9fc5-ea57439e6dbb", 00:22:30.274 "strip_size_kb": 64, 00:22:30.274 "state": "online", 00:22:30.274 "raid_level": "raid5f", 00:22:30.274 "superblock": false, 00:22:30.274 "num_base_bdevs": 4, 00:22:30.274 "num_base_bdevs_discovered": 4, 00:22:30.274 "num_base_bdevs_operational": 4, 00:22:30.274 "base_bdevs_list": [ 00:22:30.274 { 00:22:30.274 "name": 
"BaseBdev1", 00:22:30.274 "uuid": "ccac4750-6297-43f5-994e-651a3067bf89", 00:22:30.274 "is_configured": true, 00:22:30.274 "data_offset": 0, 00:22:30.274 "data_size": 65536 00:22:30.274 }, 00:22:30.275 { 00:22:30.275 "name": "BaseBdev2", 00:22:30.275 "uuid": "eb7a0af7-417c-4383-a046-8d3579c20dd5", 00:22:30.275 "is_configured": true, 00:22:30.275 "data_offset": 0, 00:22:30.275 "data_size": 65536 00:22:30.275 }, 00:22:30.275 { 00:22:30.275 "name": "BaseBdev3", 00:22:30.275 "uuid": "98cda52c-c52f-4166-99db-b06c0d2576e4", 00:22:30.275 "is_configured": true, 00:22:30.275 "data_offset": 0, 00:22:30.275 "data_size": 65536 00:22:30.275 }, 00:22:30.275 { 00:22:30.275 "name": "BaseBdev4", 00:22:30.275 "uuid": "634a69ad-8898-49fc-845c-5380a4ef688a", 00:22:30.275 "is_configured": true, 00:22:30.275 "data_offset": 0, 00:22:30.275 "data_size": 65536 00:22:30.275 } 00:22:30.275 ] 00:22:30.275 }' 00:22:30.275 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:30.275 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.844 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:30.844 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:30.844 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:30.844 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:30.844 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:30.844 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:30.844 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:30.844 13:17:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.844 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.844 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:30.844 [2024-12-06 13:17:17.741776] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:30.844 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.844 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:30.844 "name": "Existed_Raid", 00:22:30.844 "aliases": [ 00:22:30.844 "2cb22e8f-1ab8-4b47-9fc5-ea57439e6dbb" 00:22:30.844 ], 00:22:30.844 "product_name": "Raid Volume", 00:22:30.844 "block_size": 512, 00:22:30.844 "num_blocks": 196608, 00:22:30.844 "uuid": "2cb22e8f-1ab8-4b47-9fc5-ea57439e6dbb", 00:22:30.844 "assigned_rate_limits": { 00:22:30.844 "rw_ios_per_sec": 0, 00:22:30.844 "rw_mbytes_per_sec": 0, 00:22:30.844 "r_mbytes_per_sec": 0, 00:22:30.844 "w_mbytes_per_sec": 0 00:22:30.844 }, 00:22:30.844 "claimed": false, 00:22:30.844 "zoned": false, 00:22:30.844 "supported_io_types": { 00:22:30.844 "read": true, 00:22:30.844 "write": true, 00:22:30.844 "unmap": false, 00:22:30.844 "flush": false, 00:22:30.844 "reset": true, 00:22:30.844 "nvme_admin": false, 00:22:30.844 "nvme_io": false, 00:22:30.844 "nvme_io_md": false, 00:22:30.844 "write_zeroes": true, 00:22:30.844 "zcopy": false, 00:22:30.844 "get_zone_info": false, 00:22:30.844 "zone_management": false, 00:22:30.844 "zone_append": false, 00:22:30.844 "compare": false, 00:22:30.844 "compare_and_write": false, 00:22:30.844 "abort": false, 00:22:30.844 "seek_hole": false, 00:22:30.844 "seek_data": false, 00:22:30.844 "copy": false, 00:22:30.844 "nvme_iov_md": false 00:22:30.844 }, 00:22:30.844 "driver_specific": { 00:22:30.844 "raid": { 00:22:30.844 "uuid": "2cb22e8f-1ab8-4b47-9fc5-ea57439e6dbb", 00:22:30.844 "strip_size_kb": 64, 
00:22:30.844 "state": "online", 00:22:30.844 "raid_level": "raid5f", 00:22:30.844 "superblock": false, 00:22:30.844 "num_base_bdevs": 4, 00:22:30.844 "num_base_bdevs_discovered": 4, 00:22:30.844 "num_base_bdevs_operational": 4, 00:22:30.844 "base_bdevs_list": [ 00:22:30.844 { 00:22:30.844 "name": "BaseBdev1", 00:22:30.844 "uuid": "ccac4750-6297-43f5-994e-651a3067bf89", 00:22:30.844 "is_configured": true, 00:22:30.844 "data_offset": 0, 00:22:30.844 "data_size": 65536 00:22:30.844 }, 00:22:30.844 { 00:22:30.844 "name": "BaseBdev2", 00:22:30.844 "uuid": "eb7a0af7-417c-4383-a046-8d3579c20dd5", 00:22:30.844 "is_configured": true, 00:22:30.844 "data_offset": 0, 00:22:30.844 "data_size": 65536 00:22:30.844 }, 00:22:30.844 { 00:22:30.844 "name": "BaseBdev3", 00:22:30.844 "uuid": "98cda52c-c52f-4166-99db-b06c0d2576e4", 00:22:30.844 "is_configured": true, 00:22:30.844 "data_offset": 0, 00:22:30.844 "data_size": 65536 00:22:30.844 }, 00:22:30.844 { 00:22:30.844 "name": "BaseBdev4", 00:22:30.844 "uuid": "634a69ad-8898-49fc-845c-5380a4ef688a", 00:22:30.844 "is_configured": true, 00:22:30.844 "data_offset": 0, 00:22:30.844 "data_size": 65536 00:22:30.844 } 00:22:30.844 ] 00:22:30.844 } 00:22:30.844 } 00:22:30.844 }' 00:22:30.844 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:30.844 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:30.844 BaseBdev2 00:22:30.844 BaseBdev3 00:22:30.844 BaseBdev4' 00:22:30.844 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:31.109 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:31.109 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:31.109 13:17:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:31.109 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:31.109 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.109 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.109 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.109 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:31.109 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:31.109 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:31.109 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:31.109 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.109 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.109 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:31.109 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.109 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:31.109 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:31.109 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:31.109 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:22:31.109 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.109 13:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.109 13:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:31.109 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.109 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:31.109 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:31.109 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:31.109 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:22:31.109 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.109 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.109 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:31.109 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.109 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:31.109 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:31.109 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:31.109 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.109 13:17:18 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:22:31.109 [2024-12-06 13:17:18.109656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:31.378 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.378 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:31.378 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:22:31.378 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:31.378 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:22:31.378 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:22:31.378 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:22:31.378 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:31.378 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:31.378 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:31.378 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:31.378 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:31.378 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:31.378 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:31.378 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:31.378 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:31.378 13:17:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:31.378 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:31.378 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.378 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.378 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.378 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:31.378 "name": "Existed_Raid", 00:22:31.378 "uuid": "2cb22e8f-1ab8-4b47-9fc5-ea57439e6dbb", 00:22:31.378 "strip_size_kb": 64, 00:22:31.378 "state": "online", 00:22:31.378 "raid_level": "raid5f", 00:22:31.379 "superblock": false, 00:22:31.379 "num_base_bdevs": 4, 00:22:31.379 "num_base_bdevs_discovered": 3, 00:22:31.379 "num_base_bdevs_operational": 3, 00:22:31.379 "base_bdevs_list": [ 00:22:31.379 { 00:22:31.379 "name": null, 00:22:31.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.379 "is_configured": false, 00:22:31.379 "data_offset": 0, 00:22:31.379 "data_size": 65536 00:22:31.379 }, 00:22:31.379 { 00:22:31.379 "name": "BaseBdev2", 00:22:31.379 "uuid": "eb7a0af7-417c-4383-a046-8d3579c20dd5", 00:22:31.379 "is_configured": true, 00:22:31.379 "data_offset": 0, 00:22:31.379 "data_size": 65536 00:22:31.379 }, 00:22:31.379 { 00:22:31.379 "name": "BaseBdev3", 00:22:31.379 "uuid": "98cda52c-c52f-4166-99db-b06c0d2576e4", 00:22:31.379 "is_configured": true, 00:22:31.379 "data_offset": 0, 00:22:31.379 "data_size": 65536 00:22:31.379 }, 00:22:31.379 { 00:22:31.379 "name": "BaseBdev4", 00:22:31.379 "uuid": "634a69ad-8898-49fc-845c-5380a4ef688a", 00:22:31.379 "is_configured": true, 00:22:31.379 "data_offset": 0, 00:22:31.379 "data_size": 65536 00:22:31.379 } 00:22:31.379 ] 00:22:31.379 }' 00:22:31.379 
13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:31.379 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.947 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:31.947 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:31.947 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:31.947 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.947 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:31.947 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.947 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.947 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:31.947 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:31.947 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:31.947 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.947 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.947 [2024-12-06 13:17:18.772895] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:31.947 [2024-12-06 13:17:18.773055] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:31.947 [2024-12-06 13:17:18.863579] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:31.947 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:22:31.947 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:31.947 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:31.947 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:31.947 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.947 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.947 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:31.947 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.947 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:31.947 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:31.947 13:17:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:22:31.947 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.947 13:17:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.947 [2024-12-06 13:17:18.923604] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:32.205 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.205 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:32.205 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:32.205 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:32.206 13:17:19 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.206 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.206 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:32.206 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.206 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:32.206 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:32.206 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:22:32.206 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.206 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.206 [2024-12-06 13:17:19.075290] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:32.206 [2024-12-06 13:17:19.075506] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:32.206 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.206 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:32.206 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:32.206 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:32.206 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:32.206 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.206 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:22:32.206 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.206 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:32.206 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:32.206 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:22:32.206 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:22:32.206 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:32.206 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:32.206 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.206 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.465 BaseBdev2 00:22:32.465 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.465 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:22:32.465 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:32.465 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:32.465 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:32.465 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:32.465 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:32.465 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:32.465 13:17:19 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.465 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.465 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.465 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:32.465 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.465 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.465 [ 00:22:32.465 { 00:22:32.465 "name": "BaseBdev2", 00:22:32.465 "aliases": [ 00:22:32.465 "dffb4dc2-bf2d-4df0-8ad1-435075e13d37" 00:22:32.465 ], 00:22:32.465 "product_name": "Malloc disk", 00:22:32.465 "block_size": 512, 00:22:32.465 "num_blocks": 65536, 00:22:32.465 "uuid": "dffb4dc2-bf2d-4df0-8ad1-435075e13d37", 00:22:32.465 "assigned_rate_limits": { 00:22:32.465 "rw_ios_per_sec": 0, 00:22:32.465 "rw_mbytes_per_sec": 0, 00:22:32.465 "r_mbytes_per_sec": 0, 00:22:32.465 "w_mbytes_per_sec": 0 00:22:32.465 }, 00:22:32.465 "claimed": false, 00:22:32.465 "zoned": false, 00:22:32.465 "supported_io_types": { 00:22:32.465 "read": true, 00:22:32.465 "write": true, 00:22:32.465 "unmap": true, 00:22:32.465 "flush": true, 00:22:32.465 "reset": true, 00:22:32.465 "nvme_admin": false, 00:22:32.465 "nvme_io": false, 00:22:32.465 "nvme_io_md": false, 00:22:32.465 "write_zeroes": true, 00:22:32.465 "zcopy": true, 00:22:32.465 "get_zone_info": false, 00:22:32.465 "zone_management": false, 00:22:32.465 "zone_append": false, 00:22:32.465 "compare": false, 00:22:32.465 "compare_and_write": false, 00:22:32.465 "abort": true, 00:22:32.465 "seek_hole": false, 00:22:32.465 "seek_data": false, 00:22:32.465 "copy": true, 00:22:32.465 "nvme_iov_md": false 00:22:32.465 }, 00:22:32.465 "memory_domains": [ 00:22:32.465 { 00:22:32.465 "dma_device_id": "system", 00:22:32.465 
"dma_device_type": 1 00:22:32.465 }, 00:22:32.465 { 00:22:32.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:32.465 "dma_device_type": 2 00:22:32.465 } 00:22:32.465 ], 00:22:32.465 "driver_specific": {} 00:22:32.465 } 00:22:32.465 ] 00:22:32.465 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.465 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:32.465 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:32.465 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:32.465 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:32.465 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.465 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.465 BaseBdev3 00:22:32.465 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.465 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:22:32.465 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:22:32.465 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:32.465 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:32.465 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:32.465 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:32.466 13:17:19 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.466 [ 00:22:32.466 { 00:22:32.466 "name": "BaseBdev3", 00:22:32.466 "aliases": [ 00:22:32.466 "95696c2a-ca5d-4bf4-b33a-033d11a5dad7" 00:22:32.466 ], 00:22:32.466 "product_name": "Malloc disk", 00:22:32.466 "block_size": 512, 00:22:32.466 "num_blocks": 65536, 00:22:32.466 "uuid": "95696c2a-ca5d-4bf4-b33a-033d11a5dad7", 00:22:32.466 "assigned_rate_limits": { 00:22:32.466 "rw_ios_per_sec": 0, 00:22:32.466 "rw_mbytes_per_sec": 0, 00:22:32.466 "r_mbytes_per_sec": 0, 00:22:32.466 "w_mbytes_per_sec": 0 00:22:32.466 }, 00:22:32.466 "claimed": false, 00:22:32.466 "zoned": false, 00:22:32.466 "supported_io_types": { 00:22:32.466 "read": true, 00:22:32.466 "write": true, 00:22:32.466 "unmap": true, 00:22:32.466 "flush": true, 00:22:32.466 "reset": true, 00:22:32.466 "nvme_admin": false, 00:22:32.466 "nvme_io": false, 00:22:32.466 "nvme_io_md": false, 00:22:32.466 "write_zeroes": true, 00:22:32.466 "zcopy": true, 00:22:32.466 "get_zone_info": false, 00:22:32.466 "zone_management": false, 00:22:32.466 "zone_append": false, 00:22:32.466 "compare": false, 00:22:32.466 "compare_and_write": false, 00:22:32.466 "abort": true, 00:22:32.466 "seek_hole": false, 00:22:32.466 "seek_data": false, 00:22:32.466 "copy": true, 00:22:32.466 "nvme_iov_md": false 00:22:32.466 }, 00:22:32.466 "memory_domains": [ 00:22:32.466 { 00:22:32.466 
"dma_device_id": "system", 00:22:32.466 "dma_device_type": 1 00:22:32.466 }, 00:22:32.466 { 00:22:32.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:32.466 "dma_device_type": 2 00:22:32.466 } 00:22:32.466 ], 00:22:32.466 "driver_specific": {} 00:22:32.466 } 00:22:32.466 ] 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.466 BaseBdev4 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.466 [ 00:22:32.466 { 00:22:32.466 "name": "BaseBdev4", 00:22:32.466 "aliases": [ 00:22:32.466 "796e39c8-0193-4d24-b81b-8639689a2ed8" 00:22:32.466 ], 00:22:32.466 "product_name": "Malloc disk", 00:22:32.466 "block_size": 512, 00:22:32.466 "num_blocks": 65536, 00:22:32.466 "uuid": "796e39c8-0193-4d24-b81b-8639689a2ed8", 00:22:32.466 "assigned_rate_limits": { 00:22:32.466 "rw_ios_per_sec": 0, 00:22:32.466 "rw_mbytes_per_sec": 0, 00:22:32.466 "r_mbytes_per_sec": 0, 00:22:32.466 "w_mbytes_per_sec": 0 00:22:32.466 }, 00:22:32.466 "claimed": false, 00:22:32.466 "zoned": false, 00:22:32.466 "supported_io_types": { 00:22:32.466 "read": true, 00:22:32.466 "write": true, 00:22:32.466 "unmap": true, 00:22:32.466 "flush": true, 00:22:32.466 "reset": true, 00:22:32.466 "nvme_admin": false, 00:22:32.466 "nvme_io": false, 00:22:32.466 "nvme_io_md": false, 00:22:32.466 "write_zeroes": true, 00:22:32.466 "zcopy": true, 00:22:32.466 "get_zone_info": false, 00:22:32.466 "zone_management": false, 00:22:32.466 "zone_append": false, 00:22:32.466 "compare": false, 00:22:32.466 "compare_and_write": false, 00:22:32.466 "abort": true, 00:22:32.466 "seek_hole": false, 00:22:32.466 "seek_data": false, 00:22:32.466 "copy": true, 00:22:32.466 "nvme_iov_md": false 00:22:32.466 }, 00:22:32.466 "memory_domains": [ 
00:22:32.466 { 00:22:32.466 "dma_device_id": "system", 00:22:32.466 "dma_device_type": 1 00:22:32.466 }, 00:22:32.466 { 00:22:32.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:32.466 "dma_device_type": 2 00:22:32.466 } 00:22:32.466 ], 00:22:32.466 "driver_specific": {} 00:22:32.466 } 00:22:32.466 ] 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.466 [2024-12-06 13:17:19.456530] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:32.466 [2024-12-06 13:17:19.456588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:32.466 [2024-12-06 13:17:19.456620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:32.466 [2024-12-06 13:17:19.459079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:32.466 [2024-12-06 13:17:19.459347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.466 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.726 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.726 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:32.726 "name": "Existed_Raid", 00:22:32.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.726 "strip_size_kb": 64, 00:22:32.726 "state": "configuring", 00:22:32.726 "raid_level": "raid5f", 00:22:32.726 
"superblock": false, 00:22:32.726 "num_base_bdevs": 4, 00:22:32.726 "num_base_bdevs_discovered": 3, 00:22:32.726 "num_base_bdevs_operational": 4, 00:22:32.726 "base_bdevs_list": [ 00:22:32.726 { 00:22:32.726 "name": "BaseBdev1", 00:22:32.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.726 "is_configured": false, 00:22:32.726 "data_offset": 0, 00:22:32.726 "data_size": 0 00:22:32.726 }, 00:22:32.726 { 00:22:32.726 "name": "BaseBdev2", 00:22:32.726 "uuid": "dffb4dc2-bf2d-4df0-8ad1-435075e13d37", 00:22:32.726 "is_configured": true, 00:22:32.726 "data_offset": 0, 00:22:32.726 "data_size": 65536 00:22:32.726 }, 00:22:32.726 { 00:22:32.726 "name": "BaseBdev3", 00:22:32.726 "uuid": "95696c2a-ca5d-4bf4-b33a-033d11a5dad7", 00:22:32.726 "is_configured": true, 00:22:32.726 "data_offset": 0, 00:22:32.726 "data_size": 65536 00:22:32.726 }, 00:22:32.726 { 00:22:32.726 "name": "BaseBdev4", 00:22:32.726 "uuid": "796e39c8-0193-4d24-b81b-8639689a2ed8", 00:22:32.726 "is_configured": true, 00:22:32.726 "data_offset": 0, 00:22:32.726 "data_size": 65536 00:22:32.726 } 00:22:32.726 ] 00:22:32.726 }' 00:22:32.726 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:32.726 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.985 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:22:32.985 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.985 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.985 [2024-12-06 13:17:19.964735] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:32.985 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.985 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:22:32.985 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:32.985 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:32.985 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:32.985 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:32.985 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:32.985 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:32.985 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:32.985 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:32.985 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:32.985 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:32.985 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.985 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.985 13:17:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:32.985 13:17:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.243 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:33.243 "name": "Existed_Raid", 00:22:33.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.243 "strip_size_kb": 64, 00:22:33.243 "state": "configuring", 00:22:33.243 "raid_level": "raid5f", 00:22:33.243 "superblock": false, 
00:22:33.243 "num_base_bdevs": 4, 00:22:33.243 "num_base_bdevs_discovered": 2, 00:22:33.243 "num_base_bdevs_operational": 4, 00:22:33.243 "base_bdevs_list": [ 00:22:33.243 { 00:22:33.243 "name": "BaseBdev1", 00:22:33.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.243 "is_configured": false, 00:22:33.243 "data_offset": 0, 00:22:33.243 "data_size": 0 00:22:33.243 }, 00:22:33.243 { 00:22:33.243 "name": null, 00:22:33.243 "uuid": "dffb4dc2-bf2d-4df0-8ad1-435075e13d37", 00:22:33.243 "is_configured": false, 00:22:33.243 "data_offset": 0, 00:22:33.243 "data_size": 65536 00:22:33.243 }, 00:22:33.243 { 00:22:33.243 "name": "BaseBdev3", 00:22:33.243 "uuid": "95696c2a-ca5d-4bf4-b33a-033d11a5dad7", 00:22:33.243 "is_configured": true, 00:22:33.243 "data_offset": 0, 00:22:33.243 "data_size": 65536 00:22:33.243 }, 00:22:33.243 { 00:22:33.243 "name": "BaseBdev4", 00:22:33.243 "uuid": "796e39c8-0193-4d24-b81b-8639689a2ed8", 00:22:33.243 "is_configured": true, 00:22:33.243 "data_offset": 0, 00:22:33.243 "data_size": 65536 00:22:33.243 } 00:22:33.243 ] 00:22:33.243 }' 00:22:33.243 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:33.243 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.502 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:33.502 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:33.502 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.502 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.502 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.761 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:22:33.762 
13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:33.762 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.762 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.762 [2024-12-06 13:17:20.570833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:33.762 BaseBdev1 00:22:33.762 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.762 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:22:33.762 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:33.762 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:33.762 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:33.762 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:33.762 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:33.762 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:33.762 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.762 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.762 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.762 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:33.762 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.762 
13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.762 [ 00:22:33.762 { 00:22:33.762 "name": "BaseBdev1", 00:22:33.762 "aliases": [ 00:22:33.762 "452b2aed-be21-4eb0-a070-e6cc4db0d4d7" 00:22:33.762 ], 00:22:33.762 "product_name": "Malloc disk", 00:22:33.762 "block_size": 512, 00:22:33.762 "num_blocks": 65536, 00:22:33.762 "uuid": "452b2aed-be21-4eb0-a070-e6cc4db0d4d7", 00:22:33.762 "assigned_rate_limits": { 00:22:33.762 "rw_ios_per_sec": 0, 00:22:33.762 "rw_mbytes_per_sec": 0, 00:22:33.762 "r_mbytes_per_sec": 0, 00:22:33.762 "w_mbytes_per_sec": 0 00:22:33.762 }, 00:22:33.762 "claimed": true, 00:22:33.762 "claim_type": "exclusive_write", 00:22:33.762 "zoned": false, 00:22:33.762 "supported_io_types": { 00:22:33.762 "read": true, 00:22:33.762 "write": true, 00:22:33.762 "unmap": true, 00:22:33.762 "flush": true, 00:22:33.762 "reset": true, 00:22:33.762 "nvme_admin": false, 00:22:33.762 "nvme_io": false, 00:22:33.762 "nvme_io_md": false, 00:22:33.762 "write_zeroes": true, 00:22:33.762 "zcopy": true, 00:22:33.762 "get_zone_info": false, 00:22:33.762 "zone_management": false, 00:22:33.762 "zone_append": false, 00:22:33.762 "compare": false, 00:22:33.762 "compare_and_write": false, 00:22:33.762 "abort": true, 00:22:33.762 "seek_hole": false, 00:22:33.762 "seek_data": false, 00:22:33.762 "copy": true, 00:22:33.762 "nvme_iov_md": false 00:22:33.762 }, 00:22:33.762 "memory_domains": [ 00:22:33.762 { 00:22:33.762 "dma_device_id": "system", 00:22:33.762 "dma_device_type": 1 00:22:33.762 }, 00:22:33.762 { 00:22:33.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:33.762 "dma_device_type": 2 00:22:33.762 } 00:22:33.762 ], 00:22:33.762 "driver_specific": {} 00:22:33.762 } 00:22:33.762 ] 00:22:33.762 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.762 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:33.762 13:17:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:33.762 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:33.762 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:33.762 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:33.762 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:33.762 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:33.762 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:33.762 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:33.762 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:33.762 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:33.762 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:33.762 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.762 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.762 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:33.762 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.762 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:33.762 "name": "Existed_Raid", 00:22:33.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.762 "strip_size_kb": 64, 00:22:33.762 "state": 
"configuring", 00:22:33.762 "raid_level": "raid5f", 00:22:33.762 "superblock": false, 00:22:33.762 "num_base_bdevs": 4, 00:22:33.762 "num_base_bdevs_discovered": 3, 00:22:33.762 "num_base_bdevs_operational": 4, 00:22:33.762 "base_bdevs_list": [ 00:22:33.762 { 00:22:33.762 "name": "BaseBdev1", 00:22:33.762 "uuid": "452b2aed-be21-4eb0-a070-e6cc4db0d4d7", 00:22:33.762 "is_configured": true, 00:22:33.762 "data_offset": 0, 00:22:33.762 "data_size": 65536 00:22:33.762 }, 00:22:33.762 { 00:22:33.762 "name": null, 00:22:33.762 "uuid": "dffb4dc2-bf2d-4df0-8ad1-435075e13d37", 00:22:33.762 "is_configured": false, 00:22:33.762 "data_offset": 0, 00:22:33.762 "data_size": 65536 00:22:33.762 }, 00:22:33.762 { 00:22:33.762 "name": "BaseBdev3", 00:22:33.762 "uuid": "95696c2a-ca5d-4bf4-b33a-033d11a5dad7", 00:22:33.762 "is_configured": true, 00:22:33.762 "data_offset": 0, 00:22:33.762 "data_size": 65536 00:22:33.762 }, 00:22:33.762 { 00:22:33.762 "name": "BaseBdev4", 00:22:33.762 "uuid": "796e39c8-0193-4d24-b81b-8639689a2ed8", 00:22:33.762 "is_configured": true, 00:22:33.762 "data_offset": 0, 00:22:33.762 "data_size": 65536 00:22:33.762 } 00:22:33.762 ] 00:22:33.762 }' 00:22:33.762 13:17:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:33.762 13:17:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.330 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:34.330 13:17:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.330 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:34.330 13:17:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.330 13:17:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.330 13:17:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:22:34.330 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:22:34.330 13:17:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.330 13:17:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.330 [2024-12-06 13:17:21.191184] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:34.330 13:17:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.330 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:34.330 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:34.330 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:34.330 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:34.330 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:34.330 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:34.330 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:34.330 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:34.330 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:34.330 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:34.330 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:34.330 13:17:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:34.330 13:17:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.330 13:17:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.330 13:17:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.330 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:34.330 "name": "Existed_Raid", 00:22:34.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:34.330 "strip_size_kb": 64, 00:22:34.330 "state": "configuring", 00:22:34.330 "raid_level": "raid5f", 00:22:34.330 "superblock": false, 00:22:34.330 "num_base_bdevs": 4, 00:22:34.330 "num_base_bdevs_discovered": 2, 00:22:34.330 "num_base_bdevs_operational": 4, 00:22:34.330 "base_bdevs_list": [ 00:22:34.330 { 00:22:34.330 "name": "BaseBdev1", 00:22:34.330 "uuid": "452b2aed-be21-4eb0-a070-e6cc4db0d4d7", 00:22:34.330 "is_configured": true, 00:22:34.330 "data_offset": 0, 00:22:34.330 "data_size": 65536 00:22:34.330 }, 00:22:34.330 { 00:22:34.330 "name": null, 00:22:34.330 "uuid": "dffb4dc2-bf2d-4df0-8ad1-435075e13d37", 00:22:34.330 "is_configured": false, 00:22:34.330 "data_offset": 0, 00:22:34.330 "data_size": 65536 00:22:34.330 }, 00:22:34.330 { 00:22:34.330 "name": null, 00:22:34.330 "uuid": "95696c2a-ca5d-4bf4-b33a-033d11a5dad7", 00:22:34.330 "is_configured": false, 00:22:34.330 "data_offset": 0, 00:22:34.330 "data_size": 65536 00:22:34.330 }, 00:22:34.330 { 00:22:34.330 "name": "BaseBdev4", 00:22:34.330 "uuid": "796e39c8-0193-4d24-b81b-8639689a2ed8", 00:22:34.330 "is_configured": true, 00:22:34.330 "data_offset": 0, 00:22:34.330 "data_size": 65536 00:22:34.330 } 00:22:34.330 ] 00:22:34.330 }' 00:22:34.330 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:34.330 13:17:21 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.898 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:34.898 13:17:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.898 13:17:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.898 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:34.898 13:17:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.898 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:22:34.898 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:34.898 13:17:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.898 13:17:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.898 [2024-12-06 13:17:21.791292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:34.898 13:17:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.898 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:34.898 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:34.898 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:34.898 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:34.898 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:34.898 
13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:34.898 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:34.898 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:34.898 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:34.898 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:34.898 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:34.898 13:17:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.898 13:17:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.898 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:34.898 13:17:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.898 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:34.898 "name": "Existed_Raid", 00:22:34.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:34.898 "strip_size_kb": 64, 00:22:34.898 "state": "configuring", 00:22:34.898 "raid_level": "raid5f", 00:22:34.898 "superblock": false, 00:22:34.898 "num_base_bdevs": 4, 00:22:34.898 "num_base_bdevs_discovered": 3, 00:22:34.898 "num_base_bdevs_operational": 4, 00:22:34.898 "base_bdevs_list": [ 00:22:34.898 { 00:22:34.898 "name": "BaseBdev1", 00:22:34.898 "uuid": "452b2aed-be21-4eb0-a070-e6cc4db0d4d7", 00:22:34.898 "is_configured": true, 00:22:34.898 "data_offset": 0, 00:22:34.898 "data_size": 65536 00:22:34.898 }, 00:22:34.898 { 00:22:34.898 "name": null, 00:22:34.898 "uuid": "dffb4dc2-bf2d-4df0-8ad1-435075e13d37", 00:22:34.898 "is_configured": 
false, 00:22:34.898 "data_offset": 0, 00:22:34.898 "data_size": 65536 00:22:34.898 }, 00:22:34.898 { 00:22:34.898 "name": "BaseBdev3", 00:22:34.898 "uuid": "95696c2a-ca5d-4bf4-b33a-033d11a5dad7", 00:22:34.898 "is_configured": true, 00:22:34.898 "data_offset": 0, 00:22:34.898 "data_size": 65536 00:22:34.898 }, 00:22:34.898 { 00:22:34.898 "name": "BaseBdev4", 00:22:34.898 "uuid": "796e39c8-0193-4d24-b81b-8639689a2ed8", 00:22:34.898 "is_configured": true, 00:22:34.898 "data_offset": 0, 00:22:34.898 "data_size": 65536 00:22:34.898 } 00:22:34.898 ] 00:22:34.898 }' 00:22:34.898 13:17:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:34.898 13:17:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.466 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:35.466 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:35.466 13:17:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.466 13:17:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.466 13:17:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.466 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:22:35.466 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:35.466 13:17:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.466 13:17:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.466 [2024-12-06 13:17:22.427648] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:35.725 13:17:22 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.725 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:35.725 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:35.725 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:35.725 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:35.725 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:35.725 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:35.725 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:35.725 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:35.725 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:35.725 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:35.725 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:35.725 13:17:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.725 13:17:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.725 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:35.725 13:17:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.725 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:35.725 "name": "Existed_Raid", 00:22:35.725 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:35.725 "strip_size_kb": 64, 00:22:35.725 "state": "configuring", 00:22:35.725 "raid_level": "raid5f", 00:22:35.725 "superblock": false, 00:22:35.725 "num_base_bdevs": 4, 00:22:35.725 "num_base_bdevs_discovered": 2, 00:22:35.725 "num_base_bdevs_operational": 4, 00:22:35.725 "base_bdevs_list": [ 00:22:35.725 { 00:22:35.725 "name": null, 00:22:35.725 "uuid": "452b2aed-be21-4eb0-a070-e6cc4db0d4d7", 00:22:35.725 "is_configured": false, 00:22:35.725 "data_offset": 0, 00:22:35.726 "data_size": 65536 00:22:35.726 }, 00:22:35.726 { 00:22:35.726 "name": null, 00:22:35.726 "uuid": "dffb4dc2-bf2d-4df0-8ad1-435075e13d37", 00:22:35.726 "is_configured": false, 00:22:35.726 "data_offset": 0, 00:22:35.726 "data_size": 65536 00:22:35.726 }, 00:22:35.726 { 00:22:35.726 "name": "BaseBdev3", 00:22:35.726 "uuid": "95696c2a-ca5d-4bf4-b33a-033d11a5dad7", 00:22:35.726 "is_configured": true, 00:22:35.726 "data_offset": 0, 00:22:35.726 "data_size": 65536 00:22:35.726 }, 00:22:35.726 { 00:22:35.726 "name": "BaseBdev4", 00:22:35.726 "uuid": "796e39c8-0193-4d24-b81b-8639689a2ed8", 00:22:35.726 "is_configured": true, 00:22:35.726 "data_offset": 0, 00:22:35.726 "data_size": 65536 00:22:35.726 } 00:22:35.726 ] 00:22:35.726 }' 00:22:35.726 13:17:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:35.726 13:17:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.335 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:36.335 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.335 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:36.335 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.335 13:17:23 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.335 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:22:36.335 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:36.335 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.335 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.335 [2024-12-06 13:17:23.090679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:36.335 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.335 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:36.335 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:36.335 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:36.335 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:36.335 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:36.335 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:36.335 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:36.335 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:36.335 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:36.335 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:36.335 13:17:23 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:36.335 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:36.335 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.335 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.335 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.335 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:36.335 "name": "Existed_Raid", 00:22:36.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.335 "strip_size_kb": 64, 00:22:36.335 "state": "configuring", 00:22:36.335 "raid_level": "raid5f", 00:22:36.335 "superblock": false, 00:22:36.335 "num_base_bdevs": 4, 00:22:36.335 "num_base_bdevs_discovered": 3, 00:22:36.335 "num_base_bdevs_operational": 4, 00:22:36.335 "base_bdevs_list": [ 00:22:36.335 { 00:22:36.335 "name": null, 00:22:36.335 "uuid": "452b2aed-be21-4eb0-a070-e6cc4db0d4d7", 00:22:36.335 "is_configured": false, 00:22:36.335 "data_offset": 0, 00:22:36.335 "data_size": 65536 00:22:36.335 }, 00:22:36.335 { 00:22:36.335 "name": "BaseBdev2", 00:22:36.335 "uuid": "dffb4dc2-bf2d-4df0-8ad1-435075e13d37", 00:22:36.335 "is_configured": true, 00:22:36.335 "data_offset": 0, 00:22:36.335 "data_size": 65536 00:22:36.335 }, 00:22:36.335 { 00:22:36.335 "name": "BaseBdev3", 00:22:36.335 "uuid": "95696c2a-ca5d-4bf4-b33a-033d11a5dad7", 00:22:36.335 "is_configured": true, 00:22:36.335 "data_offset": 0, 00:22:36.335 "data_size": 65536 00:22:36.335 }, 00:22:36.335 { 00:22:36.335 "name": "BaseBdev4", 00:22:36.335 "uuid": "796e39c8-0193-4d24-b81b-8639689a2ed8", 00:22:36.335 "is_configured": true, 00:22:36.335 "data_offset": 0, 00:22:36.335 "data_size": 65536 00:22:36.335 } 00:22:36.335 ] 00:22:36.335 }' 00:22:36.335 13:17:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:36.335 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.593 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:36.593 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.593 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.593 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:36.853 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 452b2aed-be21-4eb0-a070-e6cc4db0d4d7 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.854 [2024-12-06 13:17:23.748560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:36.854 [2024-12-06 
13:17:23.748651] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:36.854 [2024-12-06 13:17:23.748665] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:22:36.854 [2024-12-06 13:17:23.749017] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:22:36.854 [2024-12-06 13:17:23.755144] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:36.854 [2024-12-06 13:17:23.755178] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:22:36.854 [2024-12-06 13:17:23.755567] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:36.854 NewBaseBdev 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.854 [ 00:22:36.854 { 00:22:36.854 "name": "NewBaseBdev", 00:22:36.854 "aliases": [ 00:22:36.854 "452b2aed-be21-4eb0-a070-e6cc4db0d4d7" 00:22:36.854 ], 00:22:36.854 "product_name": "Malloc disk", 00:22:36.854 "block_size": 512, 00:22:36.854 "num_blocks": 65536, 00:22:36.854 "uuid": "452b2aed-be21-4eb0-a070-e6cc4db0d4d7", 00:22:36.854 "assigned_rate_limits": { 00:22:36.854 "rw_ios_per_sec": 0, 00:22:36.854 "rw_mbytes_per_sec": 0, 00:22:36.854 "r_mbytes_per_sec": 0, 00:22:36.854 "w_mbytes_per_sec": 0 00:22:36.854 }, 00:22:36.854 "claimed": true, 00:22:36.854 "claim_type": "exclusive_write", 00:22:36.854 "zoned": false, 00:22:36.854 "supported_io_types": { 00:22:36.854 "read": true, 00:22:36.854 "write": true, 00:22:36.854 "unmap": true, 00:22:36.854 "flush": true, 00:22:36.854 "reset": true, 00:22:36.854 "nvme_admin": false, 00:22:36.854 "nvme_io": false, 00:22:36.854 "nvme_io_md": false, 00:22:36.854 "write_zeroes": true, 00:22:36.854 "zcopy": true, 00:22:36.854 "get_zone_info": false, 00:22:36.854 "zone_management": false, 00:22:36.854 "zone_append": false, 00:22:36.854 "compare": false, 00:22:36.854 "compare_and_write": false, 00:22:36.854 "abort": true, 00:22:36.854 "seek_hole": false, 00:22:36.854 "seek_data": false, 00:22:36.854 "copy": true, 00:22:36.854 "nvme_iov_md": false 00:22:36.854 }, 00:22:36.854 "memory_domains": [ 00:22:36.854 { 00:22:36.854 "dma_device_id": "system", 00:22:36.854 "dma_device_type": 1 00:22:36.854 }, 00:22:36.854 { 00:22:36.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:36.854 "dma_device_type": 2 00:22:36.854 } 
00:22:36.854 ], 00:22:36.854 "driver_specific": {} 00:22:36.854 } 00:22:36.854 ] 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:36.854 "name": "Existed_Raid", 00:22:36.854 "uuid": "75933124-1c28-44dd-b3f0-d4ac1fc16dfa", 00:22:36.854 "strip_size_kb": 64, 00:22:36.854 "state": "online", 00:22:36.854 "raid_level": "raid5f", 00:22:36.854 "superblock": false, 00:22:36.854 "num_base_bdevs": 4, 00:22:36.854 "num_base_bdevs_discovered": 4, 00:22:36.854 "num_base_bdevs_operational": 4, 00:22:36.854 "base_bdevs_list": [ 00:22:36.854 { 00:22:36.854 "name": "NewBaseBdev", 00:22:36.854 "uuid": "452b2aed-be21-4eb0-a070-e6cc4db0d4d7", 00:22:36.854 "is_configured": true, 00:22:36.854 "data_offset": 0, 00:22:36.854 "data_size": 65536 00:22:36.854 }, 00:22:36.854 { 00:22:36.854 "name": "BaseBdev2", 00:22:36.854 "uuid": "dffb4dc2-bf2d-4df0-8ad1-435075e13d37", 00:22:36.854 "is_configured": true, 00:22:36.854 "data_offset": 0, 00:22:36.854 "data_size": 65536 00:22:36.854 }, 00:22:36.854 { 00:22:36.854 "name": "BaseBdev3", 00:22:36.854 "uuid": "95696c2a-ca5d-4bf4-b33a-033d11a5dad7", 00:22:36.854 "is_configured": true, 00:22:36.854 "data_offset": 0, 00:22:36.854 "data_size": 65536 00:22:36.854 }, 00:22:36.854 { 00:22:36.854 "name": "BaseBdev4", 00:22:36.854 "uuid": "796e39c8-0193-4d24-b81b-8639689a2ed8", 00:22:36.854 "is_configured": true, 00:22:36.854 "data_offset": 0, 00:22:36.854 "data_size": 65536 00:22:36.854 } 00:22:36.854 ] 00:22:36.854 }' 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:36.854 13:17:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.424 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:22:37.424 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:37.424 13:17:24 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:37.424 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:37.424 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:37.424 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:37.424 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:37.424 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.424 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:37.424 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.424 [2024-12-06 13:17:24.351884] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:37.424 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.424 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:37.424 "name": "Existed_Raid", 00:22:37.424 "aliases": [ 00:22:37.424 "75933124-1c28-44dd-b3f0-d4ac1fc16dfa" 00:22:37.424 ], 00:22:37.424 "product_name": "Raid Volume", 00:22:37.424 "block_size": 512, 00:22:37.424 "num_blocks": 196608, 00:22:37.424 "uuid": "75933124-1c28-44dd-b3f0-d4ac1fc16dfa", 00:22:37.424 "assigned_rate_limits": { 00:22:37.424 "rw_ios_per_sec": 0, 00:22:37.424 "rw_mbytes_per_sec": 0, 00:22:37.424 "r_mbytes_per_sec": 0, 00:22:37.424 "w_mbytes_per_sec": 0 00:22:37.424 }, 00:22:37.424 "claimed": false, 00:22:37.424 "zoned": false, 00:22:37.424 "supported_io_types": { 00:22:37.424 "read": true, 00:22:37.424 "write": true, 00:22:37.424 "unmap": false, 00:22:37.424 "flush": false, 00:22:37.424 "reset": true, 00:22:37.424 "nvme_admin": false, 00:22:37.424 "nvme_io": false, 00:22:37.424 "nvme_io_md": 
false, 00:22:37.424 "write_zeroes": true, 00:22:37.424 "zcopy": false, 00:22:37.424 "get_zone_info": false, 00:22:37.424 "zone_management": false, 00:22:37.424 "zone_append": false, 00:22:37.424 "compare": false, 00:22:37.424 "compare_and_write": false, 00:22:37.424 "abort": false, 00:22:37.424 "seek_hole": false, 00:22:37.424 "seek_data": false, 00:22:37.424 "copy": false, 00:22:37.424 "nvme_iov_md": false 00:22:37.424 }, 00:22:37.424 "driver_specific": { 00:22:37.424 "raid": { 00:22:37.424 "uuid": "75933124-1c28-44dd-b3f0-d4ac1fc16dfa", 00:22:37.424 "strip_size_kb": 64, 00:22:37.424 "state": "online", 00:22:37.424 "raid_level": "raid5f", 00:22:37.424 "superblock": false, 00:22:37.424 "num_base_bdevs": 4, 00:22:37.424 "num_base_bdevs_discovered": 4, 00:22:37.424 "num_base_bdevs_operational": 4, 00:22:37.424 "base_bdevs_list": [ 00:22:37.424 { 00:22:37.424 "name": "NewBaseBdev", 00:22:37.424 "uuid": "452b2aed-be21-4eb0-a070-e6cc4db0d4d7", 00:22:37.424 "is_configured": true, 00:22:37.424 "data_offset": 0, 00:22:37.424 "data_size": 65536 00:22:37.424 }, 00:22:37.424 { 00:22:37.424 "name": "BaseBdev2", 00:22:37.424 "uuid": "dffb4dc2-bf2d-4df0-8ad1-435075e13d37", 00:22:37.424 "is_configured": true, 00:22:37.424 "data_offset": 0, 00:22:37.424 "data_size": 65536 00:22:37.424 }, 00:22:37.424 { 00:22:37.424 "name": "BaseBdev3", 00:22:37.424 "uuid": "95696c2a-ca5d-4bf4-b33a-033d11a5dad7", 00:22:37.424 "is_configured": true, 00:22:37.424 "data_offset": 0, 00:22:37.424 "data_size": 65536 00:22:37.424 }, 00:22:37.424 { 00:22:37.424 "name": "BaseBdev4", 00:22:37.424 "uuid": "796e39c8-0193-4d24-b81b-8639689a2ed8", 00:22:37.424 "is_configured": true, 00:22:37.424 "data_offset": 0, 00:22:37.424 "data_size": 65536 00:22:37.424 } 00:22:37.424 ] 00:22:37.424 } 00:22:37.424 } 00:22:37.424 }' 00:22:37.424 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:37.683 13:17:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:22:37.683 BaseBdev2 00:22:37.683 BaseBdev3 00:22:37.683 BaseBdev4' 00:22:37.683 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:37.683 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:37.683 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:37.683 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:22:37.683 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.683 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:37.683 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.683 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.683 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:37.683 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:37.683 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:37.683 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:37.683 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:37.683 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.683 13:17:24 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:37.683 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.683 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:37.683 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:37.683 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:37.683 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:37.683 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.683 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.683 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:37.683 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.683 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:37.683 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:37.683 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:37.683 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:22:37.684 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.684 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.684 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:37.942 13:17:24 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.942 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:37.942 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:37.942 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:37.942 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.942 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.942 [2024-12-06 13:17:24.735624] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:37.942 [2024-12-06 13:17:24.735668] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:37.942 [2024-12-06 13:17:24.735774] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:37.942 [2024-12-06 13:17:24.736188] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:37.942 [2024-12-06 13:17:24.736215] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:22:37.942 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.942 13:17:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83514 00:22:37.942 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83514 ']' 00:22:37.942 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83514 00:22:37.942 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:22:37.942 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:22:37.942 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83514 00:22:37.942 killing process with pid 83514 00:22:37.942 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:37.943 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:37.943 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83514' 00:22:37.943 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 83514 00:22:37.943 [2024-12-06 13:17:24.774993] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:37.943 13:17:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 83514 00:22:38.209 [2024-12-06 13:17:25.130545] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:22:39.589 00:22:39.589 real 0m13.214s 00:22:39.589 user 0m21.648s 00:22:39.589 sys 0m2.045s 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.589 ************************************ 00:22:39.589 END TEST raid5f_state_function_test 00:22:39.589 ************************************ 00:22:39.589 13:17:26 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:22:39.589 13:17:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:39.589 13:17:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:39.589 13:17:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:39.589 ************************************ 00:22:39.589 START TEST 
raid5f_state_function_test_sb 00:22:39.589 ************************************ 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:22:39.589 
13:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84197 00:22:39.589 Process raid pid: 84197 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84197' 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:39.589 13:17:26 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84197 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84197 ']' 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:39.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:39.589 13:17:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:39.589 [2024-12-06 13:17:26.437356] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:22:39.589 [2024-12-06 13:17:26.437559] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:39.848 [2024-12-06 13:17:26.617235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.848 [2024-12-06 13:17:26.766850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.107 [2024-12-06 13:17:26.999819] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:40.107 [2024-12-06 13:17:26.999888] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:40.674 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:40.674 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:22:40.674 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:40.674 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.674 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:40.674 [2024-12-06 13:17:27.493605] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:40.674 [2024-12-06 13:17:27.493712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:40.674 [2024-12-06 13:17:27.493731] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:40.674 [2024-12-06 13:17:27.493748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:40.674 [2024-12-06 13:17:27.493759] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:22:40.674 [2024-12-06 13:17:27.493774] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:40.674 [2024-12-06 13:17:27.493784] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:40.674 [2024-12-06 13:17:27.493798] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:40.674 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.674 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:40.674 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:40.674 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:40.674 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:40.674 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:40.674 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:40.674 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:40.674 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:40.674 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:40.674 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:40.674 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:40.674 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:22:40.674 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:40.674 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:40.674 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.674 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:40.674 "name": "Existed_Raid", 00:22:40.674 "uuid": "2eb70ef9-f0c1-4807-a6dd-2103f05fa3b5", 00:22:40.674 "strip_size_kb": 64, 00:22:40.674 "state": "configuring", 00:22:40.674 "raid_level": "raid5f", 00:22:40.674 "superblock": true, 00:22:40.674 "num_base_bdevs": 4, 00:22:40.674 "num_base_bdevs_discovered": 0, 00:22:40.674 "num_base_bdevs_operational": 4, 00:22:40.674 "base_bdevs_list": [ 00:22:40.674 { 00:22:40.674 "name": "BaseBdev1", 00:22:40.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.674 "is_configured": false, 00:22:40.674 "data_offset": 0, 00:22:40.674 "data_size": 0 00:22:40.674 }, 00:22:40.674 { 00:22:40.674 "name": "BaseBdev2", 00:22:40.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.674 "is_configured": false, 00:22:40.674 "data_offset": 0, 00:22:40.674 "data_size": 0 00:22:40.674 }, 00:22:40.674 { 00:22:40.674 "name": "BaseBdev3", 00:22:40.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.674 "is_configured": false, 00:22:40.674 "data_offset": 0, 00:22:40.674 "data_size": 0 00:22:40.674 }, 00:22:40.674 { 00:22:40.674 "name": "BaseBdev4", 00:22:40.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.674 "is_configured": false, 00:22:40.674 "data_offset": 0, 00:22:40.674 "data_size": 0 00:22:40.674 } 00:22:40.674 ] 00:22:40.674 }' 00:22:40.674 13:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:40.674 13:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:41.239 [2024-12-06 13:17:28.009615] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:41.239 [2024-12-06 13:17:28.009702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:41.239 [2024-12-06 13:17:28.017586] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:41.239 [2024-12-06 13:17:28.017672] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:41.239 [2024-12-06 13:17:28.017688] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:41.239 [2024-12-06 13:17:28.017703] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:41.239 [2024-12-06 13:17:28.017713] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:41.239 [2024-12-06 13:17:28.017728] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:41.239 [2024-12-06 13:17:28.017737] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:41.239 [2024-12-06 13:17:28.017751] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:41.239 [2024-12-06 13:17:28.066559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:41.239 BaseBdev1 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:41.239 [ 00:22:41.239 { 00:22:41.239 "name": "BaseBdev1", 00:22:41.239 "aliases": [ 00:22:41.239 "ccddf671-0514-45c7-a7ad-05461f06c64a" 00:22:41.239 ], 00:22:41.239 "product_name": "Malloc disk", 00:22:41.239 "block_size": 512, 00:22:41.239 "num_blocks": 65536, 00:22:41.239 "uuid": "ccddf671-0514-45c7-a7ad-05461f06c64a", 00:22:41.239 "assigned_rate_limits": { 00:22:41.239 "rw_ios_per_sec": 0, 00:22:41.239 "rw_mbytes_per_sec": 0, 00:22:41.239 "r_mbytes_per_sec": 0, 00:22:41.239 "w_mbytes_per_sec": 0 00:22:41.239 }, 00:22:41.239 "claimed": true, 00:22:41.239 "claim_type": "exclusive_write", 00:22:41.239 "zoned": false, 00:22:41.239 "supported_io_types": { 00:22:41.239 "read": true, 00:22:41.239 "write": true, 00:22:41.239 "unmap": true, 00:22:41.239 "flush": true, 00:22:41.239 "reset": true, 00:22:41.239 "nvme_admin": false, 00:22:41.239 "nvme_io": false, 00:22:41.239 "nvme_io_md": false, 00:22:41.239 "write_zeroes": true, 00:22:41.239 "zcopy": true, 00:22:41.239 "get_zone_info": false, 00:22:41.239 "zone_management": false, 00:22:41.239 "zone_append": false, 00:22:41.239 "compare": false, 00:22:41.239 "compare_and_write": false, 00:22:41.239 "abort": true, 00:22:41.239 "seek_hole": false, 00:22:41.239 "seek_data": false, 00:22:41.239 "copy": true, 00:22:41.239 "nvme_iov_md": false 00:22:41.239 }, 00:22:41.239 "memory_domains": [ 00:22:41.239 { 00:22:41.239 "dma_device_id": "system", 00:22:41.239 "dma_device_type": 1 00:22:41.239 }, 00:22:41.239 { 00:22:41.239 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:22:41.239 "dma_device_type": 2 00:22:41.239 } 00:22:41.239 ], 00:22:41.239 "driver_specific": {} 00:22:41.239 } 00:22:41.239 ] 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.239 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:41.239 "name": "Existed_Raid", 00:22:41.239 "uuid": "ffb07293-26ce-483c-8363-42d29edf7c96", 00:22:41.239 "strip_size_kb": 64, 00:22:41.239 "state": "configuring", 00:22:41.239 "raid_level": "raid5f", 00:22:41.239 "superblock": true, 00:22:41.239 "num_base_bdevs": 4, 00:22:41.239 "num_base_bdevs_discovered": 1, 00:22:41.239 "num_base_bdevs_operational": 4, 00:22:41.239 "base_bdevs_list": [ 00:22:41.239 { 00:22:41.239 "name": "BaseBdev1", 00:22:41.239 "uuid": "ccddf671-0514-45c7-a7ad-05461f06c64a", 00:22:41.239 "is_configured": true, 00:22:41.239 "data_offset": 2048, 00:22:41.239 "data_size": 63488 00:22:41.239 }, 00:22:41.239 { 00:22:41.239 "name": "BaseBdev2", 00:22:41.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.239 "is_configured": false, 00:22:41.239 "data_offset": 0, 00:22:41.239 "data_size": 0 00:22:41.239 }, 00:22:41.239 { 00:22:41.239 "name": "BaseBdev3", 00:22:41.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.239 "is_configured": false, 00:22:41.239 "data_offset": 0, 00:22:41.239 "data_size": 0 00:22:41.239 }, 00:22:41.239 { 00:22:41.239 "name": "BaseBdev4", 00:22:41.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.239 "is_configured": false, 00:22:41.239 "data_offset": 0, 00:22:41.239 "data_size": 0 00:22:41.240 } 00:22:41.240 ] 00:22:41.240 }' 00:22:41.240 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:41.240 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:41.806 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:41.806 13:17:28 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.806 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:41.806 [2024-12-06 13:17:28.578884] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:41.806 [2024-12-06 13:17:28.578983] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:41.806 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.806 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:41.806 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.806 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:41.806 [2024-12-06 13:17:28.586949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:41.806 [2024-12-06 13:17:28.589568] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:41.806 [2024-12-06 13:17:28.589631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:41.806 [2024-12-06 13:17:28.589647] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:41.806 [2024-12-06 13:17:28.589664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:41.806 [2024-12-06 13:17:28.589675] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:41.806 [2024-12-06 13:17:28.589689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:41.806 13:17:28 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.806 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:41.806 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:41.806 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:41.806 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:41.806 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:41.806 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:41.806 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:41.806 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:41.806 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:41.806 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:41.806 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:41.806 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:41.806 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:41.806 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:41.806 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.806 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:41.806 13:17:28 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.806 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:41.806 "name": "Existed_Raid", 00:22:41.806 "uuid": "7d5a61bc-421b-40b0-a9f7-ad6e05c9b70c", 00:22:41.806 "strip_size_kb": 64, 00:22:41.806 "state": "configuring", 00:22:41.806 "raid_level": "raid5f", 00:22:41.806 "superblock": true, 00:22:41.806 "num_base_bdevs": 4, 00:22:41.806 "num_base_bdevs_discovered": 1, 00:22:41.806 "num_base_bdevs_operational": 4, 00:22:41.806 "base_bdevs_list": [ 00:22:41.806 { 00:22:41.806 "name": "BaseBdev1", 00:22:41.806 "uuid": "ccddf671-0514-45c7-a7ad-05461f06c64a", 00:22:41.806 "is_configured": true, 00:22:41.806 "data_offset": 2048, 00:22:41.806 "data_size": 63488 00:22:41.806 }, 00:22:41.806 { 00:22:41.806 "name": "BaseBdev2", 00:22:41.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.806 "is_configured": false, 00:22:41.806 "data_offset": 0, 00:22:41.806 "data_size": 0 00:22:41.806 }, 00:22:41.806 { 00:22:41.806 "name": "BaseBdev3", 00:22:41.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.806 "is_configured": false, 00:22:41.806 "data_offset": 0, 00:22:41.806 "data_size": 0 00:22:41.806 }, 00:22:41.806 { 00:22:41.806 "name": "BaseBdev4", 00:22:41.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.806 "is_configured": false, 00:22:41.806 "data_offset": 0, 00:22:41.806 "data_size": 0 00:22:41.806 } 00:22:41.806 ] 00:22:41.806 }' 00:22:41.806 13:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:41.806 13:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:42.374 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:42.374 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:22:42.374 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:42.374 [2024-12-06 13:17:29.148955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:42.374 BaseBdev2 00:22:42.374 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.374 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:42.374 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:42.374 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:42.374 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:42.374 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:42.374 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:42.374 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:42.374 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.374 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:42.374 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.374 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:42.374 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.374 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:42.374 [ 00:22:42.374 { 00:22:42.374 "name": "BaseBdev2", 00:22:42.374 "aliases": [ 00:22:42.374 
"4e7b2d11-ce0c-41d1-934f-ba305bdffc8c" 00:22:42.374 ], 00:22:42.374 "product_name": "Malloc disk", 00:22:42.374 "block_size": 512, 00:22:42.374 "num_blocks": 65536, 00:22:42.374 "uuid": "4e7b2d11-ce0c-41d1-934f-ba305bdffc8c", 00:22:42.374 "assigned_rate_limits": { 00:22:42.374 "rw_ios_per_sec": 0, 00:22:42.374 "rw_mbytes_per_sec": 0, 00:22:42.374 "r_mbytes_per_sec": 0, 00:22:42.374 "w_mbytes_per_sec": 0 00:22:42.374 }, 00:22:42.374 "claimed": true, 00:22:42.374 "claim_type": "exclusive_write", 00:22:42.374 "zoned": false, 00:22:42.374 "supported_io_types": { 00:22:42.374 "read": true, 00:22:42.374 "write": true, 00:22:42.374 "unmap": true, 00:22:42.374 "flush": true, 00:22:42.374 "reset": true, 00:22:42.374 "nvme_admin": false, 00:22:42.374 "nvme_io": false, 00:22:42.374 "nvme_io_md": false, 00:22:42.374 "write_zeroes": true, 00:22:42.374 "zcopy": true, 00:22:42.374 "get_zone_info": false, 00:22:42.374 "zone_management": false, 00:22:42.374 "zone_append": false, 00:22:42.374 "compare": false, 00:22:42.374 "compare_and_write": false, 00:22:42.374 "abort": true, 00:22:42.374 "seek_hole": false, 00:22:42.374 "seek_data": false, 00:22:42.374 "copy": true, 00:22:42.374 "nvme_iov_md": false 00:22:42.374 }, 00:22:42.374 "memory_domains": [ 00:22:42.374 { 00:22:42.374 "dma_device_id": "system", 00:22:42.374 "dma_device_type": 1 00:22:42.374 }, 00:22:42.374 { 00:22:42.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:42.374 "dma_device_type": 2 00:22:42.374 } 00:22:42.374 ], 00:22:42.374 "driver_specific": {} 00:22:42.374 } 00:22:42.374 ] 00:22:42.374 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.374 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:42.374 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:42.374 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:22:42.374 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:42.374 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:42.374 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:42.374 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:42.374 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:42.374 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:42.374 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:42.374 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:42.374 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:42.374 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:42.374 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:42.374 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:42.374 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.374 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:42.374 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.374 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:42.374 "name": "Existed_Raid", 00:22:42.374 "uuid": 
"7d5a61bc-421b-40b0-a9f7-ad6e05c9b70c", 00:22:42.374 "strip_size_kb": 64, 00:22:42.374 "state": "configuring", 00:22:42.374 "raid_level": "raid5f", 00:22:42.374 "superblock": true, 00:22:42.374 "num_base_bdevs": 4, 00:22:42.374 "num_base_bdevs_discovered": 2, 00:22:42.375 "num_base_bdevs_operational": 4, 00:22:42.375 "base_bdevs_list": [ 00:22:42.375 { 00:22:42.375 "name": "BaseBdev1", 00:22:42.375 "uuid": "ccddf671-0514-45c7-a7ad-05461f06c64a", 00:22:42.375 "is_configured": true, 00:22:42.375 "data_offset": 2048, 00:22:42.375 "data_size": 63488 00:22:42.375 }, 00:22:42.375 { 00:22:42.375 "name": "BaseBdev2", 00:22:42.375 "uuid": "4e7b2d11-ce0c-41d1-934f-ba305bdffc8c", 00:22:42.375 "is_configured": true, 00:22:42.375 "data_offset": 2048, 00:22:42.375 "data_size": 63488 00:22:42.375 }, 00:22:42.375 { 00:22:42.375 "name": "BaseBdev3", 00:22:42.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:42.375 "is_configured": false, 00:22:42.375 "data_offset": 0, 00:22:42.375 "data_size": 0 00:22:42.375 }, 00:22:42.375 { 00:22:42.375 "name": "BaseBdev4", 00:22:42.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:42.375 "is_configured": false, 00:22:42.375 "data_offset": 0, 00:22:42.375 "data_size": 0 00:22:42.375 } 00:22:42.375 ] 00:22:42.375 }' 00:22:42.375 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:42.375 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:42.942 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:42.942 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.942 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:42.942 [2024-12-06 13:17:29.739318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:42.942 BaseBdev3 
00:22:42.942 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.942 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:22:42.942 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:22:42.942 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:42.942 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:42.942 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:42.942 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:42.942 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:42.942 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.942 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:42.942 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.942 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:42.942 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.942 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:42.942 [ 00:22:42.942 { 00:22:42.942 "name": "BaseBdev3", 00:22:42.942 "aliases": [ 00:22:42.942 "634002f0-8f0a-4f11-83b7-ff37deee6978" 00:22:42.942 ], 00:22:42.942 "product_name": "Malloc disk", 00:22:42.942 "block_size": 512, 00:22:42.942 "num_blocks": 65536, 00:22:42.942 "uuid": "634002f0-8f0a-4f11-83b7-ff37deee6978", 00:22:42.942 
"assigned_rate_limits": { 00:22:42.942 "rw_ios_per_sec": 0, 00:22:42.942 "rw_mbytes_per_sec": 0, 00:22:42.942 "r_mbytes_per_sec": 0, 00:22:42.942 "w_mbytes_per_sec": 0 00:22:42.942 }, 00:22:42.942 "claimed": true, 00:22:42.942 "claim_type": "exclusive_write", 00:22:42.942 "zoned": false, 00:22:42.942 "supported_io_types": { 00:22:42.942 "read": true, 00:22:42.942 "write": true, 00:22:42.942 "unmap": true, 00:22:42.942 "flush": true, 00:22:42.942 "reset": true, 00:22:42.942 "nvme_admin": false, 00:22:42.942 "nvme_io": false, 00:22:42.942 "nvme_io_md": false, 00:22:42.942 "write_zeroes": true, 00:22:42.942 "zcopy": true, 00:22:42.942 "get_zone_info": false, 00:22:42.942 "zone_management": false, 00:22:42.942 "zone_append": false, 00:22:42.942 "compare": false, 00:22:42.942 "compare_and_write": false, 00:22:42.942 "abort": true, 00:22:42.942 "seek_hole": false, 00:22:42.942 "seek_data": false, 00:22:42.942 "copy": true, 00:22:42.942 "nvme_iov_md": false 00:22:42.942 }, 00:22:42.942 "memory_domains": [ 00:22:42.942 { 00:22:42.942 "dma_device_id": "system", 00:22:42.942 "dma_device_type": 1 00:22:42.942 }, 00:22:42.942 { 00:22:42.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:42.942 "dma_device_type": 2 00:22:42.942 } 00:22:42.942 ], 00:22:42.942 "driver_specific": {} 00:22:42.942 } 00:22:42.942 ] 00:22:42.942 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.942 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:42.943 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:42.943 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:42.943 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:42.943 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:22:42.943 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:42.943 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:42.943 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:42.943 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:42.943 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:42.943 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:42.943 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:42.943 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:42.943 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:42.943 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:42.943 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.943 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:42.943 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.943 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:42.943 "name": "Existed_Raid", 00:22:42.943 "uuid": "7d5a61bc-421b-40b0-a9f7-ad6e05c9b70c", 00:22:42.943 "strip_size_kb": 64, 00:22:42.943 "state": "configuring", 00:22:42.943 "raid_level": "raid5f", 00:22:42.943 "superblock": true, 00:22:42.943 "num_base_bdevs": 4, 00:22:42.943 "num_base_bdevs_discovered": 3, 
00:22:42.943 "num_base_bdevs_operational": 4, 00:22:42.943 "base_bdevs_list": [ 00:22:42.943 { 00:22:42.943 "name": "BaseBdev1", 00:22:42.943 "uuid": "ccddf671-0514-45c7-a7ad-05461f06c64a", 00:22:42.943 "is_configured": true, 00:22:42.943 "data_offset": 2048, 00:22:42.943 "data_size": 63488 00:22:42.943 }, 00:22:42.943 { 00:22:42.943 "name": "BaseBdev2", 00:22:42.943 "uuid": "4e7b2d11-ce0c-41d1-934f-ba305bdffc8c", 00:22:42.943 "is_configured": true, 00:22:42.943 "data_offset": 2048, 00:22:42.943 "data_size": 63488 00:22:42.943 }, 00:22:42.943 { 00:22:42.943 "name": "BaseBdev3", 00:22:42.943 "uuid": "634002f0-8f0a-4f11-83b7-ff37deee6978", 00:22:42.943 "is_configured": true, 00:22:42.943 "data_offset": 2048, 00:22:42.943 "data_size": 63488 00:22:42.943 }, 00:22:42.943 { 00:22:42.943 "name": "BaseBdev4", 00:22:42.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:42.943 "is_configured": false, 00:22:42.943 "data_offset": 0, 00:22:42.943 "data_size": 0 00:22:42.943 } 00:22:42.943 ] 00:22:42.943 }' 00:22:42.943 13:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:42.943 13:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:43.508 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:22:43.508 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.508 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:43.508 [2024-12-06 13:17:30.302473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:43.508 [2024-12-06 13:17:30.302986] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:43.508 [2024-12-06 13:17:30.303014] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:43.508 [2024-12-06 
13:17:30.303376] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:43.508 BaseBdev4 00:22:43.508 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.508 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:22:43.508 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:22:43.508 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:43.508 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:43.508 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:43.508 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:43.508 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:43.508 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.508 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:43.508 [2024-12-06 13:17:30.310596] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:43.508 [2024-12-06 13:17:30.310625] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:43.508 [2024-12-06 13:17:30.310980] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:43.508 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.508 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:43.508 13:17:30 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.509 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:43.509 [ 00:22:43.509 { 00:22:43.509 "name": "BaseBdev4", 00:22:43.509 "aliases": [ 00:22:43.509 "9d1f489d-3709-4e2d-be6a-f91ad2e8b756" 00:22:43.509 ], 00:22:43.509 "product_name": "Malloc disk", 00:22:43.509 "block_size": 512, 00:22:43.509 "num_blocks": 65536, 00:22:43.509 "uuid": "9d1f489d-3709-4e2d-be6a-f91ad2e8b756", 00:22:43.509 "assigned_rate_limits": { 00:22:43.509 "rw_ios_per_sec": 0, 00:22:43.509 "rw_mbytes_per_sec": 0, 00:22:43.509 "r_mbytes_per_sec": 0, 00:22:43.509 "w_mbytes_per_sec": 0 00:22:43.509 }, 00:22:43.509 "claimed": true, 00:22:43.509 "claim_type": "exclusive_write", 00:22:43.509 "zoned": false, 00:22:43.509 "supported_io_types": { 00:22:43.509 "read": true, 00:22:43.509 "write": true, 00:22:43.509 "unmap": true, 00:22:43.509 "flush": true, 00:22:43.509 "reset": true, 00:22:43.509 "nvme_admin": false, 00:22:43.509 "nvme_io": false, 00:22:43.509 "nvme_io_md": false, 00:22:43.509 "write_zeroes": true, 00:22:43.509 "zcopy": true, 00:22:43.509 "get_zone_info": false, 00:22:43.509 "zone_management": false, 00:22:43.509 "zone_append": false, 00:22:43.509 "compare": false, 00:22:43.509 "compare_and_write": false, 00:22:43.509 "abort": true, 00:22:43.509 "seek_hole": false, 00:22:43.509 "seek_data": false, 00:22:43.509 "copy": true, 00:22:43.509 "nvme_iov_md": false 00:22:43.509 }, 00:22:43.509 "memory_domains": [ 00:22:43.509 { 00:22:43.509 "dma_device_id": "system", 00:22:43.509 "dma_device_type": 1 00:22:43.509 }, 00:22:43.509 { 00:22:43.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:43.509 "dma_device_type": 2 00:22:43.509 } 00:22:43.509 ], 00:22:43.509 "driver_specific": {} 00:22:43.509 } 00:22:43.509 ] 00:22:43.509 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.509 13:17:30 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:43.509 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:43.509 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:43.509 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:22:43.509 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:43.509 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:43.509 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:43.509 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:43.509 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:43.509 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:43.509 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:43.509 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:43.509 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:43.509 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:43.509 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:43.509 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.509 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:22:43.509 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.509 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:43.509 "name": "Existed_Raid", 00:22:43.509 "uuid": "7d5a61bc-421b-40b0-a9f7-ad6e05c9b70c", 00:22:43.509 "strip_size_kb": 64, 00:22:43.509 "state": "online", 00:22:43.509 "raid_level": "raid5f", 00:22:43.509 "superblock": true, 00:22:43.509 "num_base_bdevs": 4, 00:22:43.509 "num_base_bdevs_discovered": 4, 00:22:43.509 "num_base_bdevs_operational": 4, 00:22:43.509 "base_bdevs_list": [ 00:22:43.509 { 00:22:43.509 "name": "BaseBdev1", 00:22:43.509 "uuid": "ccddf671-0514-45c7-a7ad-05461f06c64a", 00:22:43.509 "is_configured": true, 00:22:43.509 "data_offset": 2048, 00:22:43.509 "data_size": 63488 00:22:43.509 }, 00:22:43.509 { 00:22:43.509 "name": "BaseBdev2", 00:22:43.509 "uuid": "4e7b2d11-ce0c-41d1-934f-ba305bdffc8c", 00:22:43.509 "is_configured": true, 00:22:43.509 "data_offset": 2048, 00:22:43.509 "data_size": 63488 00:22:43.509 }, 00:22:43.509 { 00:22:43.509 "name": "BaseBdev3", 00:22:43.509 "uuid": "634002f0-8f0a-4f11-83b7-ff37deee6978", 00:22:43.509 "is_configured": true, 00:22:43.509 "data_offset": 2048, 00:22:43.509 "data_size": 63488 00:22:43.509 }, 00:22:43.509 { 00:22:43.509 "name": "BaseBdev4", 00:22:43.509 "uuid": "9d1f489d-3709-4e2d-be6a-f91ad2e8b756", 00:22:43.509 "is_configured": true, 00:22:43.509 "data_offset": 2048, 00:22:43.509 "data_size": 63488 00:22:43.509 } 00:22:43.509 ] 00:22:43.509 }' 00:22:43.509 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:43.509 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:44.075 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:44.075 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:22:44.075 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:44.075 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:44.075 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:22:44.075 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:44.075 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:44.075 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:44.075 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.075 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:44.075 [2024-12-06 13:17:30.871437] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:44.075 13:17:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.075 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:44.075 "name": "Existed_Raid", 00:22:44.075 "aliases": [ 00:22:44.075 "7d5a61bc-421b-40b0-a9f7-ad6e05c9b70c" 00:22:44.075 ], 00:22:44.075 "product_name": "Raid Volume", 00:22:44.075 "block_size": 512, 00:22:44.075 "num_blocks": 190464, 00:22:44.075 "uuid": "7d5a61bc-421b-40b0-a9f7-ad6e05c9b70c", 00:22:44.075 "assigned_rate_limits": { 00:22:44.075 "rw_ios_per_sec": 0, 00:22:44.075 "rw_mbytes_per_sec": 0, 00:22:44.075 "r_mbytes_per_sec": 0, 00:22:44.075 "w_mbytes_per_sec": 0 00:22:44.075 }, 00:22:44.075 "claimed": false, 00:22:44.075 "zoned": false, 00:22:44.075 "supported_io_types": { 00:22:44.075 "read": true, 00:22:44.075 "write": true, 00:22:44.075 "unmap": false, 00:22:44.075 "flush": false, 
00:22:44.075 "reset": true, 00:22:44.075 "nvme_admin": false, 00:22:44.075 "nvme_io": false, 00:22:44.075 "nvme_io_md": false, 00:22:44.075 "write_zeroes": true, 00:22:44.075 "zcopy": false, 00:22:44.075 "get_zone_info": false, 00:22:44.075 "zone_management": false, 00:22:44.075 "zone_append": false, 00:22:44.075 "compare": false, 00:22:44.075 "compare_and_write": false, 00:22:44.075 "abort": false, 00:22:44.075 "seek_hole": false, 00:22:44.075 "seek_data": false, 00:22:44.075 "copy": false, 00:22:44.075 "nvme_iov_md": false 00:22:44.075 }, 00:22:44.075 "driver_specific": { 00:22:44.075 "raid": { 00:22:44.075 "uuid": "7d5a61bc-421b-40b0-a9f7-ad6e05c9b70c", 00:22:44.075 "strip_size_kb": 64, 00:22:44.075 "state": "online", 00:22:44.075 "raid_level": "raid5f", 00:22:44.075 "superblock": true, 00:22:44.075 "num_base_bdevs": 4, 00:22:44.075 "num_base_bdevs_discovered": 4, 00:22:44.075 "num_base_bdevs_operational": 4, 00:22:44.075 "base_bdevs_list": [ 00:22:44.075 { 00:22:44.075 "name": "BaseBdev1", 00:22:44.075 "uuid": "ccddf671-0514-45c7-a7ad-05461f06c64a", 00:22:44.075 "is_configured": true, 00:22:44.075 "data_offset": 2048, 00:22:44.075 "data_size": 63488 00:22:44.075 }, 00:22:44.075 { 00:22:44.075 "name": "BaseBdev2", 00:22:44.075 "uuid": "4e7b2d11-ce0c-41d1-934f-ba305bdffc8c", 00:22:44.075 "is_configured": true, 00:22:44.075 "data_offset": 2048, 00:22:44.075 "data_size": 63488 00:22:44.075 }, 00:22:44.075 { 00:22:44.075 "name": "BaseBdev3", 00:22:44.075 "uuid": "634002f0-8f0a-4f11-83b7-ff37deee6978", 00:22:44.075 "is_configured": true, 00:22:44.075 "data_offset": 2048, 00:22:44.075 "data_size": 63488 00:22:44.075 }, 00:22:44.075 { 00:22:44.076 "name": "BaseBdev4", 00:22:44.076 "uuid": "9d1f489d-3709-4e2d-be6a-f91ad2e8b756", 00:22:44.076 "is_configured": true, 00:22:44.076 "data_offset": 2048, 00:22:44.076 "data_size": 63488 00:22:44.076 } 00:22:44.076 ] 00:22:44.076 } 00:22:44.076 } 00:22:44.076 }' 00:22:44.076 13:17:30 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:44.076 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:44.076 BaseBdev2 00:22:44.076 BaseBdev3 00:22:44.076 BaseBdev4' 00:22:44.076 13:17:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:44.076 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:44.076 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:44.076 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:44.076 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:44.076 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.076 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:44.076 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.076 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:44.076 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:44.076 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:44.076 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:44.076 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.076 13:17:31 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:22:44.076 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:44.333 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.333 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:44.333 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:44.333 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:44.333 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:44.333 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.333 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:44.333 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:44.333 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.333 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:44.333 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:44.333 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:44.333 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:44.333 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:22:44.333 13:17:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.333 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:44.333 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.333 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:44.333 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:44.333 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:44.333 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.333 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:44.333 [2024-12-06 13:17:31.251343] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:44.617 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.617 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:44.617 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:22:44.617 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:44.617 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:22:44.617 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:22:44.617 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:22:44.617 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:44.617 13:17:31 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:44.617 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:44.617 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:44.617 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:44.617 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:44.617 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:44.617 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:44.617 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:44.617 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:44.617 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:44.617 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.617 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:44.617 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.617 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:44.617 "name": "Existed_Raid", 00:22:44.617 "uuid": "7d5a61bc-421b-40b0-a9f7-ad6e05c9b70c", 00:22:44.617 "strip_size_kb": 64, 00:22:44.617 "state": "online", 00:22:44.617 "raid_level": "raid5f", 00:22:44.617 "superblock": true, 00:22:44.617 "num_base_bdevs": 4, 00:22:44.617 "num_base_bdevs_discovered": 3, 00:22:44.617 "num_base_bdevs_operational": 3, 00:22:44.617 "base_bdevs_list": [ 00:22:44.617 { 00:22:44.617 "name": 
null, 00:22:44.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.617 "is_configured": false, 00:22:44.617 "data_offset": 0, 00:22:44.617 "data_size": 63488 00:22:44.617 }, 00:22:44.617 { 00:22:44.617 "name": "BaseBdev2", 00:22:44.617 "uuid": "4e7b2d11-ce0c-41d1-934f-ba305bdffc8c", 00:22:44.617 "is_configured": true, 00:22:44.617 "data_offset": 2048, 00:22:44.617 "data_size": 63488 00:22:44.617 }, 00:22:44.617 { 00:22:44.617 "name": "BaseBdev3", 00:22:44.618 "uuid": "634002f0-8f0a-4f11-83b7-ff37deee6978", 00:22:44.618 "is_configured": true, 00:22:44.618 "data_offset": 2048, 00:22:44.618 "data_size": 63488 00:22:44.618 }, 00:22:44.618 { 00:22:44.618 "name": "BaseBdev4", 00:22:44.618 "uuid": "9d1f489d-3709-4e2d-be6a-f91ad2e8b756", 00:22:44.618 "is_configured": true, 00:22:44.618 "data_offset": 2048, 00:22:44.618 "data_size": 63488 00:22:44.618 } 00:22:44.618 ] 00:22:44.618 }' 00:22:44.618 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:44.618 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:44.875 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:44.875 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:44.875 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:44.875 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:44.875 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.875 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:45.133 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.133 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:22:45.133 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:45.133 13:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:45.133 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.133 13:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:45.133 [2024-12-06 13:17:31.928551] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:45.133 [2024-12-06 13:17:31.928788] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:45.133 [2024-12-06 13:17:32.020277] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:45.133 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.133 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:45.133 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:45.133 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:45.133 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:45.133 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.133 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:45.133 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.133 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:45.133 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:22:45.133 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:22:45.133 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.133 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:45.133 [2024-12-06 13:17:32.080362] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:45.391 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.391 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:45.392 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:45.392 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:45.392 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.392 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:45.392 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:45.392 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.392 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:45.392 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:45.392 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:22:45.392 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.392 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:45.392 [2024-12-06 
13:17:32.232585] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:45.392 [2024-12-06 13:17:32.232658] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:45.392 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.392 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:45.392 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:45.392 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:45.392 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:45.392 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.392 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:45.392 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.392 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:45.392 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:45.392 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:22:45.392 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:22:45.392 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:45.392 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:45.392 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.392 13:17:32 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:45.651 BaseBdev2 00:22:45.651 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.651 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:22:45.651 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:45.651 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:45.651 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:45.651 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:45.651 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:45.651 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:45.651 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.651 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:45.651 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.651 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:45.651 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.651 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:45.651 [ 00:22:45.651 { 00:22:45.651 "name": "BaseBdev2", 00:22:45.651 "aliases": [ 00:22:45.651 "7c7d8b54-9ef5-4eac-8731-7a84375adf95" 00:22:45.651 ], 00:22:45.652 "product_name": "Malloc disk", 00:22:45.652 "block_size": 512, 00:22:45.652 
"num_blocks": 65536, 00:22:45.652 "uuid": "7c7d8b54-9ef5-4eac-8731-7a84375adf95", 00:22:45.652 "assigned_rate_limits": { 00:22:45.652 "rw_ios_per_sec": 0, 00:22:45.652 "rw_mbytes_per_sec": 0, 00:22:45.652 "r_mbytes_per_sec": 0, 00:22:45.652 "w_mbytes_per_sec": 0 00:22:45.652 }, 00:22:45.652 "claimed": false, 00:22:45.652 "zoned": false, 00:22:45.652 "supported_io_types": { 00:22:45.652 "read": true, 00:22:45.652 "write": true, 00:22:45.652 "unmap": true, 00:22:45.652 "flush": true, 00:22:45.652 "reset": true, 00:22:45.652 "nvme_admin": false, 00:22:45.652 "nvme_io": false, 00:22:45.652 "nvme_io_md": false, 00:22:45.652 "write_zeroes": true, 00:22:45.652 "zcopy": true, 00:22:45.652 "get_zone_info": false, 00:22:45.652 "zone_management": false, 00:22:45.652 "zone_append": false, 00:22:45.652 "compare": false, 00:22:45.652 "compare_and_write": false, 00:22:45.652 "abort": true, 00:22:45.652 "seek_hole": false, 00:22:45.652 "seek_data": false, 00:22:45.652 "copy": true, 00:22:45.652 "nvme_iov_md": false 00:22:45.652 }, 00:22:45.652 "memory_domains": [ 00:22:45.652 { 00:22:45.652 "dma_device_id": "system", 00:22:45.652 "dma_device_type": 1 00:22:45.652 }, 00:22:45.652 { 00:22:45.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:45.652 "dma_device_type": 2 00:22:45.652 } 00:22:45.652 ], 00:22:45.652 "driver_specific": {} 00:22:45.652 } 00:22:45.652 ] 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:45.652 13:17:32 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:45.652 BaseBdev3 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:45.652 [ 00:22:45.652 { 00:22:45.652 "name": "BaseBdev3", 00:22:45.652 "aliases": [ 00:22:45.652 
"cec8000f-d3f1-4eac-bd62-69ff6c0e695e" 00:22:45.652 ], 00:22:45.652 "product_name": "Malloc disk", 00:22:45.652 "block_size": 512, 00:22:45.652 "num_blocks": 65536, 00:22:45.652 "uuid": "cec8000f-d3f1-4eac-bd62-69ff6c0e695e", 00:22:45.652 "assigned_rate_limits": { 00:22:45.652 "rw_ios_per_sec": 0, 00:22:45.652 "rw_mbytes_per_sec": 0, 00:22:45.652 "r_mbytes_per_sec": 0, 00:22:45.652 "w_mbytes_per_sec": 0 00:22:45.652 }, 00:22:45.652 "claimed": false, 00:22:45.652 "zoned": false, 00:22:45.652 "supported_io_types": { 00:22:45.652 "read": true, 00:22:45.652 "write": true, 00:22:45.652 "unmap": true, 00:22:45.652 "flush": true, 00:22:45.652 "reset": true, 00:22:45.652 "nvme_admin": false, 00:22:45.652 "nvme_io": false, 00:22:45.652 "nvme_io_md": false, 00:22:45.652 "write_zeroes": true, 00:22:45.652 "zcopy": true, 00:22:45.652 "get_zone_info": false, 00:22:45.652 "zone_management": false, 00:22:45.652 "zone_append": false, 00:22:45.652 "compare": false, 00:22:45.652 "compare_and_write": false, 00:22:45.652 "abort": true, 00:22:45.652 "seek_hole": false, 00:22:45.652 "seek_data": false, 00:22:45.652 "copy": true, 00:22:45.652 "nvme_iov_md": false 00:22:45.652 }, 00:22:45.652 "memory_domains": [ 00:22:45.652 { 00:22:45.652 "dma_device_id": "system", 00:22:45.652 "dma_device_type": 1 00:22:45.652 }, 00:22:45.652 { 00:22:45.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:45.652 "dma_device_type": 2 00:22:45.652 } 00:22:45.652 ], 00:22:45.652 "driver_specific": {} 00:22:45.652 } 00:22:45.652 ] 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:45.652 13:17:32 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:45.652 BaseBdev4 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:22:45.652 [ 00:22:45.652 { 00:22:45.652 "name": "BaseBdev4", 00:22:45.652 "aliases": [ 00:22:45.652 "5e9679d4-1a6a-408e-a36f-2c25aec03d68" 00:22:45.652 ], 00:22:45.652 "product_name": "Malloc disk", 00:22:45.652 "block_size": 512, 00:22:45.652 "num_blocks": 65536, 00:22:45.652 "uuid": "5e9679d4-1a6a-408e-a36f-2c25aec03d68", 00:22:45.652 "assigned_rate_limits": { 00:22:45.652 "rw_ios_per_sec": 0, 00:22:45.652 "rw_mbytes_per_sec": 0, 00:22:45.652 "r_mbytes_per_sec": 0, 00:22:45.652 "w_mbytes_per_sec": 0 00:22:45.652 }, 00:22:45.652 "claimed": false, 00:22:45.652 "zoned": false, 00:22:45.652 "supported_io_types": { 00:22:45.652 "read": true, 00:22:45.652 "write": true, 00:22:45.652 "unmap": true, 00:22:45.652 "flush": true, 00:22:45.652 "reset": true, 00:22:45.652 "nvme_admin": false, 00:22:45.652 "nvme_io": false, 00:22:45.652 "nvme_io_md": false, 00:22:45.652 "write_zeroes": true, 00:22:45.652 "zcopy": true, 00:22:45.652 "get_zone_info": false, 00:22:45.652 "zone_management": false, 00:22:45.652 "zone_append": false, 00:22:45.652 "compare": false, 00:22:45.652 "compare_and_write": false, 00:22:45.652 "abort": true, 00:22:45.652 "seek_hole": false, 00:22:45.652 "seek_data": false, 00:22:45.652 "copy": true, 00:22:45.652 "nvme_iov_md": false 00:22:45.652 }, 00:22:45.652 "memory_domains": [ 00:22:45.652 { 00:22:45.652 "dma_device_id": "system", 00:22:45.652 "dma_device_type": 1 00:22:45.652 }, 00:22:45.652 { 00:22:45.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:45.652 "dma_device_type": 2 00:22:45.652 } 00:22:45.652 ], 00:22:45.652 "driver_specific": {} 00:22:45.652 } 00:22:45.652 ] 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:45.652 13:17:32 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:45.652 [2024-12-06 13:17:32.616035] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:45.652 [2024-12-06 13:17:32.616110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:45.652 [2024-12-06 13:17:32.616156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:45.652 [2024-12-06 13:17:32.618861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:45.652 [2024-12-06 13:17:32.618951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:45.652 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.911 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:45.912 "name": "Existed_Raid", 00:22:45.912 "uuid": "2c2cd111-e1bb-49d1-a6d7-373284e7ec27", 00:22:45.912 "strip_size_kb": 64, 00:22:45.912 "state": "configuring", 00:22:45.912 "raid_level": "raid5f", 00:22:45.912 "superblock": true, 00:22:45.912 "num_base_bdevs": 4, 00:22:45.912 "num_base_bdevs_discovered": 3, 00:22:45.912 "num_base_bdevs_operational": 4, 00:22:45.912 "base_bdevs_list": [ 00:22:45.912 { 00:22:45.912 "name": "BaseBdev1", 00:22:45.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.912 "is_configured": false, 00:22:45.912 "data_offset": 0, 00:22:45.912 "data_size": 0 00:22:45.912 }, 00:22:45.912 { 00:22:45.912 "name": "BaseBdev2", 00:22:45.912 "uuid": "7c7d8b54-9ef5-4eac-8731-7a84375adf95", 00:22:45.912 "is_configured": true, 00:22:45.912 "data_offset": 2048, 00:22:45.912 
"data_size": 63488 00:22:45.912 }, 00:22:45.912 { 00:22:45.912 "name": "BaseBdev3", 00:22:45.912 "uuid": "cec8000f-d3f1-4eac-bd62-69ff6c0e695e", 00:22:45.912 "is_configured": true, 00:22:45.912 "data_offset": 2048, 00:22:45.912 "data_size": 63488 00:22:45.912 }, 00:22:45.912 { 00:22:45.912 "name": "BaseBdev4", 00:22:45.912 "uuid": "5e9679d4-1a6a-408e-a36f-2c25aec03d68", 00:22:45.912 "is_configured": true, 00:22:45.912 "data_offset": 2048, 00:22:45.912 "data_size": 63488 00:22:45.912 } 00:22:45.912 ] 00:22:45.912 }' 00:22:45.912 13:17:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:45.912 13:17:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:46.170 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:22:46.170 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.170 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:46.170 [2024-12-06 13:17:33.176234] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:46.170 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.170 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:46.170 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:46.170 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:46.170 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:46.170 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:46.170 13:17:33 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:46.170 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:46.170 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:46.170 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:46.170 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:46.427 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:46.427 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:46.427 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.427 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:46.427 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.427 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:46.427 "name": "Existed_Raid", 00:22:46.427 "uuid": "2c2cd111-e1bb-49d1-a6d7-373284e7ec27", 00:22:46.427 "strip_size_kb": 64, 00:22:46.427 "state": "configuring", 00:22:46.427 "raid_level": "raid5f", 00:22:46.427 "superblock": true, 00:22:46.427 "num_base_bdevs": 4, 00:22:46.427 "num_base_bdevs_discovered": 2, 00:22:46.427 "num_base_bdevs_operational": 4, 00:22:46.427 "base_bdevs_list": [ 00:22:46.427 { 00:22:46.427 "name": "BaseBdev1", 00:22:46.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.427 "is_configured": false, 00:22:46.427 "data_offset": 0, 00:22:46.427 "data_size": 0 00:22:46.427 }, 00:22:46.427 { 00:22:46.427 "name": null, 00:22:46.427 "uuid": "7c7d8b54-9ef5-4eac-8731-7a84375adf95", 00:22:46.427 
"is_configured": false, 00:22:46.427 "data_offset": 0, 00:22:46.427 "data_size": 63488 00:22:46.427 }, 00:22:46.427 { 00:22:46.427 "name": "BaseBdev3", 00:22:46.427 "uuid": "cec8000f-d3f1-4eac-bd62-69ff6c0e695e", 00:22:46.427 "is_configured": true, 00:22:46.428 "data_offset": 2048, 00:22:46.428 "data_size": 63488 00:22:46.428 }, 00:22:46.428 { 00:22:46.428 "name": "BaseBdev4", 00:22:46.428 "uuid": "5e9679d4-1a6a-408e-a36f-2c25aec03d68", 00:22:46.428 "is_configured": true, 00:22:46.428 "data_offset": 2048, 00:22:46.428 "data_size": 63488 00:22:46.428 } 00:22:46.428 ] 00:22:46.428 }' 00:22:46.428 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:46.428 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:46.994 [2024-12-06 13:17:33.805734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:22:46.994 BaseBdev1 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:46.994 [ 00:22:46.994 { 00:22:46.994 "name": "BaseBdev1", 00:22:46.994 "aliases": [ 00:22:46.994 "5f006e48-6a14-4425-a1ef-c714d5e24b20" 00:22:46.994 ], 00:22:46.994 "product_name": "Malloc disk", 00:22:46.994 "block_size": 512, 00:22:46.994 "num_blocks": 65536, 00:22:46.994 "uuid": "5f006e48-6a14-4425-a1ef-c714d5e24b20", 
00:22:46.994 "assigned_rate_limits": { 00:22:46.994 "rw_ios_per_sec": 0, 00:22:46.994 "rw_mbytes_per_sec": 0, 00:22:46.994 "r_mbytes_per_sec": 0, 00:22:46.994 "w_mbytes_per_sec": 0 00:22:46.994 }, 00:22:46.994 "claimed": true, 00:22:46.994 "claim_type": "exclusive_write", 00:22:46.994 "zoned": false, 00:22:46.994 "supported_io_types": { 00:22:46.994 "read": true, 00:22:46.994 "write": true, 00:22:46.994 "unmap": true, 00:22:46.994 "flush": true, 00:22:46.994 "reset": true, 00:22:46.994 "nvme_admin": false, 00:22:46.994 "nvme_io": false, 00:22:46.994 "nvme_io_md": false, 00:22:46.994 "write_zeroes": true, 00:22:46.994 "zcopy": true, 00:22:46.994 "get_zone_info": false, 00:22:46.994 "zone_management": false, 00:22:46.994 "zone_append": false, 00:22:46.994 "compare": false, 00:22:46.994 "compare_and_write": false, 00:22:46.994 "abort": true, 00:22:46.994 "seek_hole": false, 00:22:46.994 "seek_data": false, 00:22:46.994 "copy": true, 00:22:46.994 "nvme_iov_md": false 00:22:46.994 }, 00:22:46.994 "memory_domains": [ 00:22:46.994 { 00:22:46.994 "dma_device_id": "system", 00:22:46.994 "dma_device_type": 1 00:22:46.994 }, 00:22:46.994 { 00:22:46.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:46.994 "dma_device_type": 2 00:22:46.994 } 00:22:46.994 ], 00:22:46.994 "driver_specific": {} 00:22:46.994 } 00:22:46.994 ] 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:46.994 13:17:33 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:46.994 "name": "Existed_Raid", 00:22:46.994 "uuid": "2c2cd111-e1bb-49d1-a6d7-373284e7ec27", 00:22:46.994 "strip_size_kb": 64, 00:22:46.994 "state": "configuring", 00:22:46.994 "raid_level": "raid5f", 00:22:46.994 "superblock": true, 00:22:46.994 "num_base_bdevs": 4, 00:22:46.994 "num_base_bdevs_discovered": 3, 00:22:46.994 "num_base_bdevs_operational": 4, 00:22:46.994 "base_bdevs_list": [ 00:22:46.994 { 00:22:46.994 "name": "BaseBdev1", 00:22:46.994 "uuid": "5f006e48-6a14-4425-a1ef-c714d5e24b20", 
00:22:46.994 "is_configured": true, 00:22:46.994 "data_offset": 2048, 00:22:46.994 "data_size": 63488 00:22:46.994 }, 00:22:46.994 { 00:22:46.994 "name": null, 00:22:46.994 "uuid": "7c7d8b54-9ef5-4eac-8731-7a84375adf95", 00:22:46.994 "is_configured": false, 00:22:46.994 "data_offset": 0, 00:22:46.994 "data_size": 63488 00:22:46.994 }, 00:22:46.994 { 00:22:46.994 "name": "BaseBdev3", 00:22:46.994 "uuid": "cec8000f-d3f1-4eac-bd62-69ff6c0e695e", 00:22:46.994 "is_configured": true, 00:22:46.994 "data_offset": 2048, 00:22:46.994 "data_size": 63488 00:22:46.994 }, 00:22:46.994 { 00:22:46.994 "name": "BaseBdev4", 00:22:46.994 "uuid": "5e9679d4-1a6a-408e-a36f-2c25aec03d68", 00:22:46.994 "is_configured": true, 00:22:46.994 "data_offset": 2048, 00:22:46.994 "data_size": 63488 00:22:46.994 } 00:22:46.994 ] 00:22:46.994 }' 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:46.994 13:17:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:47.561 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:47.561 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:47.561 13:17:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.561 13:17:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:47.561 13:17:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.561 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:22:47.561 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:22:47.561 13:17:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:22:47.561 13:17:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:47.561 [2024-12-06 13:17:34.398034] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:47.561 13:17:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.561 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:47.561 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:47.561 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:47.561 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:47.561 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:47.561 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:47.561 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:47.561 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:47.561 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:47.561 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:47.561 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:47.561 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:47.561 13:17:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.561 13:17:34 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:22:47.561 13:17:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.561 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:47.561 "name": "Existed_Raid", 00:22:47.561 "uuid": "2c2cd111-e1bb-49d1-a6d7-373284e7ec27", 00:22:47.561 "strip_size_kb": 64, 00:22:47.561 "state": "configuring", 00:22:47.561 "raid_level": "raid5f", 00:22:47.561 "superblock": true, 00:22:47.561 "num_base_bdevs": 4, 00:22:47.561 "num_base_bdevs_discovered": 2, 00:22:47.561 "num_base_bdevs_operational": 4, 00:22:47.561 "base_bdevs_list": [ 00:22:47.561 { 00:22:47.561 "name": "BaseBdev1", 00:22:47.561 "uuid": "5f006e48-6a14-4425-a1ef-c714d5e24b20", 00:22:47.561 "is_configured": true, 00:22:47.561 "data_offset": 2048, 00:22:47.561 "data_size": 63488 00:22:47.561 }, 00:22:47.561 { 00:22:47.561 "name": null, 00:22:47.561 "uuid": "7c7d8b54-9ef5-4eac-8731-7a84375adf95", 00:22:47.561 "is_configured": false, 00:22:47.561 "data_offset": 0, 00:22:47.561 "data_size": 63488 00:22:47.561 }, 00:22:47.561 { 00:22:47.561 "name": null, 00:22:47.561 "uuid": "cec8000f-d3f1-4eac-bd62-69ff6c0e695e", 00:22:47.561 "is_configured": false, 00:22:47.561 "data_offset": 0, 00:22:47.561 "data_size": 63488 00:22:47.561 }, 00:22:47.561 { 00:22:47.561 "name": "BaseBdev4", 00:22:47.561 "uuid": "5e9679d4-1a6a-408e-a36f-2c25aec03d68", 00:22:47.561 "is_configured": true, 00:22:47.561 "data_offset": 2048, 00:22:47.561 "data_size": 63488 00:22:47.561 } 00:22:47.561 ] 00:22:47.561 }' 00:22:47.561 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:47.561 13:17:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.128 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.128 13:17:34 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.128 13:17:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.128 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:48.128 13:17:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.128 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:22:48.128 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:48.128 13:17:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.128 13:17:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.128 [2024-12-06 13:17:34.974160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:48.128 13:17:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.128 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:48.128 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:48.128 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:48.128 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:48.128 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:48.128 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:48.128 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:22:48.128 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:48.128 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:48.128 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:48.128 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.128 13:17:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.128 13:17:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.128 13:17:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:48.128 13:17:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.128 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:48.128 "name": "Existed_Raid", 00:22:48.128 "uuid": "2c2cd111-e1bb-49d1-a6d7-373284e7ec27", 00:22:48.128 "strip_size_kb": 64, 00:22:48.128 "state": "configuring", 00:22:48.128 "raid_level": "raid5f", 00:22:48.128 "superblock": true, 00:22:48.128 "num_base_bdevs": 4, 00:22:48.128 "num_base_bdevs_discovered": 3, 00:22:48.128 "num_base_bdevs_operational": 4, 00:22:48.128 "base_bdevs_list": [ 00:22:48.128 { 00:22:48.128 "name": "BaseBdev1", 00:22:48.128 "uuid": "5f006e48-6a14-4425-a1ef-c714d5e24b20", 00:22:48.128 "is_configured": true, 00:22:48.128 "data_offset": 2048, 00:22:48.128 "data_size": 63488 00:22:48.128 }, 00:22:48.128 { 00:22:48.128 "name": null, 00:22:48.129 "uuid": "7c7d8b54-9ef5-4eac-8731-7a84375adf95", 00:22:48.129 "is_configured": false, 00:22:48.129 "data_offset": 0, 00:22:48.129 "data_size": 63488 00:22:48.129 }, 00:22:48.129 { 00:22:48.129 "name": "BaseBdev3", 00:22:48.129 "uuid": "cec8000f-d3f1-4eac-bd62-69ff6c0e695e", 
00:22:48.129 "is_configured": true, 00:22:48.129 "data_offset": 2048, 00:22:48.129 "data_size": 63488 00:22:48.129 }, 00:22:48.129 { 00:22:48.129 "name": "BaseBdev4", 00:22:48.129 "uuid": "5e9679d4-1a6a-408e-a36f-2c25aec03d68", 00:22:48.129 "is_configured": true, 00:22:48.129 "data_offset": 2048, 00:22:48.129 "data_size": 63488 00:22:48.129 } 00:22:48.129 ] 00:22:48.129 }' 00:22:48.129 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:48.129 13:17:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.695 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.695 13:17:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.695 13:17:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.695 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:48.695 13:17:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.695 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:22:48.695 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:48.695 13:17:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.695 13:17:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.695 [2024-12-06 13:17:35.534352] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:48.695 13:17:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.695 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:22:48.695 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:48.695 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:48.695 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:48.695 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:48.695 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:48.695 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:48.695 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:48.695 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:48.695 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:48.695 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:48.695 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.695 13:17:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.695 13:17:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.695 13:17:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.695 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:48.695 "name": "Existed_Raid", 00:22:48.695 "uuid": "2c2cd111-e1bb-49d1-a6d7-373284e7ec27", 00:22:48.695 "strip_size_kb": 64, 00:22:48.695 "state": "configuring", 00:22:48.695 "raid_level": "raid5f", 
00:22:48.695 "superblock": true, 00:22:48.695 "num_base_bdevs": 4, 00:22:48.695 "num_base_bdevs_discovered": 2, 00:22:48.695 "num_base_bdevs_operational": 4, 00:22:48.695 "base_bdevs_list": [ 00:22:48.695 { 00:22:48.695 "name": null, 00:22:48.695 "uuid": "5f006e48-6a14-4425-a1ef-c714d5e24b20", 00:22:48.695 "is_configured": false, 00:22:48.695 "data_offset": 0, 00:22:48.695 "data_size": 63488 00:22:48.695 }, 00:22:48.695 { 00:22:48.695 "name": null, 00:22:48.695 "uuid": "7c7d8b54-9ef5-4eac-8731-7a84375adf95", 00:22:48.695 "is_configured": false, 00:22:48.695 "data_offset": 0, 00:22:48.695 "data_size": 63488 00:22:48.695 }, 00:22:48.695 { 00:22:48.695 "name": "BaseBdev3", 00:22:48.695 "uuid": "cec8000f-d3f1-4eac-bd62-69ff6c0e695e", 00:22:48.695 "is_configured": true, 00:22:48.695 "data_offset": 2048, 00:22:48.695 "data_size": 63488 00:22:48.695 }, 00:22:48.695 { 00:22:48.695 "name": "BaseBdev4", 00:22:48.695 "uuid": "5e9679d4-1a6a-408e-a36f-2c25aec03d68", 00:22:48.695 "is_configured": true, 00:22:48.695 "data_offset": 2048, 00:22:48.695 "data_size": 63488 00:22:48.695 } 00:22:48.695 ] 00:22:48.695 }' 00:22:48.695 13:17:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:48.695 13:17:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.261 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:49.261 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:49.261 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.261 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.261 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.261 13:17:36 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:22:49.261 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:49.261 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.261 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.261 [2024-12-06 13:17:36.167662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:49.262 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.262 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:49.262 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:49.262 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:49.262 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:49.262 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:49.262 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:49.262 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:49.262 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:49.262 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:49.262 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:49.262 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:22:49.262 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.262 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.262 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:49.262 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.262 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:49.262 "name": "Existed_Raid", 00:22:49.262 "uuid": "2c2cd111-e1bb-49d1-a6d7-373284e7ec27", 00:22:49.262 "strip_size_kb": 64, 00:22:49.262 "state": "configuring", 00:22:49.262 "raid_level": "raid5f", 00:22:49.262 "superblock": true, 00:22:49.262 "num_base_bdevs": 4, 00:22:49.262 "num_base_bdevs_discovered": 3, 00:22:49.262 "num_base_bdevs_operational": 4, 00:22:49.262 "base_bdevs_list": [ 00:22:49.262 { 00:22:49.262 "name": null, 00:22:49.262 "uuid": "5f006e48-6a14-4425-a1ef-c714d5e24b20", 00:22:49.262 "is_configured": false, 00:22:49.262 "data_offset": 0, 00:22:49.262 "data_size": 63488 00:22:49.262 }, 00:22:49.262 { 00:22:49.262 "name": "BaseBdev2", 00:22:49.262 "uuid": "7c7d8b54-9ef5-4eac-8731-7a84375adf95", 00:22:49.262 "is_configured": true, 00:22:49.262 "data_offset": 2048, 00:22:49.262 "data_size": 63488 00:22:49.262 }, 00:22:49.262 { 00:22:49.262 "name": "BaseBdev3", 00:22:49.262 "uuid": "cec8000f-d3f1-4eac-bd62-69ff6c0e695e", 00:22:49.262 "is_configured": true, 00:22:49.262 "data_offset": 2048, 00:22:49.262 "data_size": 63488 00:22:49.262 }, 00:22:49.262 { 00:22:49.262 "name": "BaseBdev4", 00:22:49.262 "uuid": "5e9679d4-1a6a-408e-a36f-2c25aec03d68", 00:22:49.262 "is_configured": true, 00:22:49.262 "data_offset": 2048, 00:22:49.262 "data_size": 63488 00:22:49.262 } 00:22:49.262 ] 00:22:49.262 }' 00:22:49.262 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:22:49.262 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.837 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:49.837 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:49.837 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.837 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.837 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.837 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:22:49.837 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:49.837 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:49.837 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.837 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.837 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.837 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5f006e48-6a14-4425-a1ef-c714d5e24b20 00:22:49.837 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.837 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.837 [2024-12-06 13:17:36.837974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:49.837 [2024-12-06 13:17:36.838348] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:49.837 [2024-12-06 13:17:36.838367] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:49.837 NewBaseBdev 00:22:49.837 [2024-12-06 13:17:36.838736] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:22:49.837 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.837 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:22:49.837 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:22:49.837 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:49.837 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:49.837 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:49.837 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:49.837 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:49.837 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.837 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.837 [2024-12-06 13:17:36.845532] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:49.837 [2024-12-06 13:17:36.845593] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:22:49.837 [2024-12-06 13:17:36.846008] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:50.096 13:17:36 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.096 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:50.096 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.096 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.096 [ 00:22:50.096 { 00:22:50.096 "name": "NewBaseBdev", 00:22:50.096 "aliases": [ 00:22:50.096 "5f006e48-6a14-4425-a1ef-c714d5e24b20" 00:22:50.096 ], 00:22:50.096 "product_name": "Malloc disk", 00:22:50.096 "block_size": 512, 00:22:50.096 "num_blocks": 65536, 00:22:50.096 "uuid": "5f006e48-6a14-4425-a1ef-c714d5e24b20", 00:22:50.096 "assigned_rate_limits": { 00:22:50.096 "rw_ios_per_sec": 0, 00:22:50.096 "rw_mbytes_per_sec": 0, 00:22:50.096 "r_mbytes_per_sec": 0, 00:22:50.096 "w_mbytes_per_sec": 0 00:22:50.096 }, 00:22:50.096 "claimed": true, 00:22:50.096 "claim_type": "exclusive_write", 00:22:50.096 "zoned": false, 00:22:50.096 "supported_io_types": { 00:22:50.096 "read": true, 00:22:50.096 "write": true, 00:22:50.096 "unmap": true, 00:22:50.096 "flush": true, 00:22:50.096 "reset": true, 00:22:50.096 "nvme_admin": false, 00:22:50.096 "nvme_io": false, 00:22:50.096 "nvme_io_md": false, 00:22:50.096 "write_zeroes": true, 00:22:50.096 "zcopy": true, 00:22:50.096 "get_zone_info": false, 00:22:50.096 "zone_management": false, 00:22:50.096 "zone_append": false, 00:22:50.096 "compare": false, 00:22:50.096 "compare_and_write": false, 00:22:50.096 "abort": true, 00:22:50.096 "seek_hole": false, 00:22:50.096 "seek_data": false, 00:22:50.096 "copy": true, 00:22:50.096 "nvme_iov_md": false 00:22:50.096 }, 00:22:50.096 "memory_domains": [ 00:22:50.096 { 00:22:50.096 "dma_device_id": "system", 00:22:50.096 "dma_device_type": 1 00:22:50.096 }, 00:22:50.096 { 00:22:50.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:50.096 "dma_device_type": 2 00:22:50.096 } 
00:22:50.096 ], 00:22:50.096 "driver_specific": {} 00:22:50.096 } 00:22:50.096 ] 00:22:50.096 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.096 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:50.096 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:22:50.096 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:50.096 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:50.096 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:50.096 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:50.096 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:50.096 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:50.096 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:50.096 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:50.096 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:50.096 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:50.096 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:50.096 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.096 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.096 
13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.096 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:50.096 "name": "Existed_Raid", 00:22:50.096 "uuid": "2c2cd111-e1bb-49d1-a6d7-373284e7ec27", 00:22:50.096 "strip_size_kb": 64, 00:22:50.096 "state": "online", 00:22:50.096 "raid_level": "raid5f", 00:22:50.096 "superblock": true, 00:22:50.096 "num_base_bdevs": 4, 00:22:50.096 "num_base_bdevs_discovered": 4, 00:22:50.096 "num_base_bdevs_operational": 4, 00:22:50.096 "base_bdevs_list": [ 00:22:50.096 { 00:22:50.096 "name": "NewBaseBdev", 00:22:50.096 "uuid": "5f006e48-6a14-4425-a1ef-c714d5e24b20", 00:22:50.096 "is_configured": true, 00:22:50.096 "data_offset": 2048, 00:22:50.096 "data_size": 63488 00:22:50.096 }, 00:22:50.096 { 00:22:50.096 "name": "BaseBdev2", 00:22:50.096 "uuid": "7c7d8b54-9ef5-4eac-8731-7a84375adf95", 00:22:50.096 "is_configured": true, 00:22:50.096 "data_offset": 2048, 00:22:50.096 "data_size": 63488 00:22:50.096 }, 00:22:50.096 { 00:22:50.096 "name": "BaseBdev3", 00:22:50.096 "uuid": "cec8000f-d3f1-4eac-bd62-69ff6c0e695e", 00:22:50.096 "is_configured": true, 00:22:50.096 "data_offset": 2048, 00:22:50.096 "data_size": 63488 00:22:50.096 }, 00:22:50.096 { 00:22:50.096 "name": "BaseBdev4", 00:22:50.096 "uuid": "5e9679d4-1a6a-408e-a36f-2c25aec03d68", 00:22:50.096 "is_configured": true, 00:22:50.097 "data_offset": 2048, 00:22:50.097 "data_size": 63488 00:22:50.097 } 00:22:50.097 ] 00:22:50.097 }' 00:22:50.097 13:17:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:50.097 13:17:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.664 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:22:50.664 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:22:50.664 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:50.664 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:50.664 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:22:50.664 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:50.664 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:50.664 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:50.664 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.664 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.664 [2024-12-06 13:17:37.386861] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:50.664 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.664 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:50.664 "name": "Existed_Raid", 00:22:50.664 "aliases": [ 00:22:50.664 "2c2cd111-e1bb-49d1-a6d7-373284e7ec27" 00:22:50.664 ], 00:22:50.664 "product_name": "Raid Volume", 00:22:50.664 "block_size": 512, 00:22:50.664 "num_blocks": 190464, 00:22:50.664 "uuid": "2c2cd111-e1bb-49d1-a6d7-373284e7ec27", 00:22:50.664 "assigned_rate_limits": { 00:22:50.664 "rw_ios_per_sec": 0, 00:22:50.664 "rw_mbytes_per_sec": 0, 00:22:50.664 "r_mbytes_per_sec": 0, 00:22:50.664 "w_mbytes_per_sec": 0 00:22:50.664 }, 00:22:50.664 "claimed": false, 00:22:50.664 "zoned": false, 00:22:50.664 "supported_io_types": { 00:22:50.664 "read": true, 00:22:50.664 "write": true, 00:22:50.664 "unmap": false, 00:22:50.664 "flush": false, 
00:22:50.664 "reset": true, 00:22:50.664 "nvme_admin": false, 00:22:50.664 "nvme_io": false, 00:22:50.664 "nvme_io_md": false, 00:22:50.664 "write_zeroes": true, 00:22:50.664 "zcopy": false, 00:22:50.664 "get_zone_info": false, 00:22:50.664 "zone_management": false, 00:22:50.664 "zone_append": false, 00:22:50.664 "compare": false, 00:22:50.664 "compare_and_write": false, 00:22:50.664 "abort": false, 00:22:50.664 "seek_hole": false, 00:22:50.664 "seek_data": false, 00:22:50.664 "copy": false, 00:22:50.664 "nvme_iov_md": false 00:22:50.664 }, 00:22:50.664 "driver_specific": { 00:22:50.664 "raid": { 00:22:50.664 "uuid": "2c2cd111-e1bb-49d1-a6d7-373284e7ec27", 00:22:50.664 "strip_size_kb": 64, 00:22:50.664 "state": "online", 00:22:50.664 "raid_level": "raid5f", 00:22:50.664 "superblock": true, 00:22:50.664 "num_base_bdevs": 4, 00:22:50.664 "num_base_bdevs_discovered": 4, 00:22:50.664 "num_base_bdevs_operational": 4, 00:22:50.664 "base_bdevs_list": [ 00:22:50.664 { 00:22:50.664 "name": "NewBaseBdev", 00:22:50.664 "uuid": "5f006e48-6a14-4425-a1ef-c714d5e24b20", 00:22:50.664 "is_configured": true, 00:22:50.664 "data_offset": 2048, 00:22:50.664 "data_size": 63488 00:22:50.664 }, 00:22:50.664 { 00:22:50.664 "name": "BaseBdev2", 00:22:50.664 "uuid": "7c7d8b54-9ef5-4eac-8731-7a84375adf95", 00:22:50.664 "is_configured": true, 00:22:50.664 "data_offset": 2048, 00:22:50.664 "data_size": 63488 00:22:50.664 }, 00:22:50.664 { 00:22:50.664 "name": "BaseBdev3", 00:22:50.664 "uuid": "cec8000f-d3f1-4eac-bd62-69ff6c0e695e", 00:22:50.664 "is_configured": true, 00:22:50.664 "data_offset": 2048, 00:22:50.664 "data_size": 63488 00:22:50.664 }, 00:22:50.664 { 00:22:50.664 "name": "BaseBdev4", 00:22:50.664 "uuid": "5e9679d4-1a6a-408e-a36f-2c25aec03d68", 00:22:50.664 "is_configured": true, 00:22:50.664 "data_offset": 2048, 00:22:50.664 "data_size": 63488 00:22:50.664 } 00:22:50.664 ] 00:22:50.664 } 00:22:50.664 } 00:22:50.664 }' 00:22:50.664 13:17:37 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:50.664 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:22:50.664 BaseBdev2 00:22:50.664 BaseBdev3 00:22:50.664 BaseBdev4' 00:22:50.664 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:50.664 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:50.664 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:50.664 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:50.665 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:22:50.665 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.665 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.665 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.665 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:50.665 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:50.665 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:50.665 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:50.665 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.665 13:17:37 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:22:50.665 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:50.665 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.665 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:50.665 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:50.665 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:50.665 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:50.665 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.665 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.665 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:50.665 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.924 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:50.924 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:50.924 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:50.924 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:22:50.924 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.924 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:22:50.924 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:50.924 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.924 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:50.924 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:50.924 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:50.924 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.924 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.924 [2024-12-06 13:17:37.746706] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:50.924 [2024-12-06 13:17:37.746766] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:50.924 [2024-12-06 13:17:37.746953] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:50.924 [2024-12-06 13:17:37.747414] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:50.924 [2024-12-06 13:17:37.747445] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:22:50.924 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.924 13:17:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84197 00:22:50.924 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84197 ']' 00:22:50.924 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 
84197 00:22:50.924 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:22:50.924 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:50.924 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84197 00:22:50.924 killing process with pid 84197 00:22:50.924 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:50.924 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:50.924 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84197' 00:22:50.924 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 84197 00:22:50.924 [2024-12-06 13:17:37.784012] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:50.924 13:17:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 84197 00:22:51.183 [2024-12-06 13:17:38.180128] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:52.561 13:17:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:22:52.561 00:22:52.561 real 0m13.016s 00:22:52.561 user 0m21.265s 00:22:52.561 sys 0m1.971s 00:22:52.561 13:17:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:52.561 13:17:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.561 ************************************ 00:22:52.561 END TEST raid5f_state_function_test_sb 00:22:52.561 ************************************ 00:22:52.561 13:17:39 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:22:52.561 13:17:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 
-le 1 ']' 00:22:52.561 13:17:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:52.561 13:17:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:52.561 ************************************ 00:22:52.561 START TEST raid5f_superblock_test 00:22:52.561 ************************************ 00:22:52.561 13:17:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:22:52.561 13:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:22:52.561 13:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:22:52.561 13:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:22:52.561 13:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:22:52.561 13:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:22:52.561 13:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:22:52.561 13:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:22:52.561 13:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:22:52.561 13:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:22:52.561 13:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:22:52.561 13:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:22:52.561 13:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:22:52.561 13:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:22:52.561 13:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:22:52.561 13:17:39 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:22:52.561 13:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:22:52.561 13:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84879 00:22:52.561 13:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84879 00:22:52.561 13:17:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84879 ']' 00:22:52.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:52.561 13:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:22:52.561 13:17:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:52.561 13:17:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:52.561 13:17:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:52.561 13:17:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:52.561 13:17:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:52.561 [2024-12-06 13:17:39.537060] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:22:52.561 [2024-12-06 13:17:39.537282] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84879 ] 00:22:52.898 [2024-12-06 13:17:39.725643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.158 [2024-12-06 13:17:39.880481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.158 [2024-12-06 13:17:40.109253] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:53.158 [2024-12-06 13:17:40.109369] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:53.727 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:53.727 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:22:53.727 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:22:53.727 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:53.727 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:53.728 malloc1 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:53.728 [2024-12-06 13:17:40.529345] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:53.728 [2024-12-06 13:17:40.529771] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:53.728 [2024-12-06 13:17:40.529961] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:53.728 [2024-12-06 13:17:40.530110] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:53.728 [2024-12-06 13:17:40.533169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:53.728 [2024-12-06 13:17:40.533365] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:53.728 pt1 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:53.728 malloc2 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:53.728 [2024-12-06 13:17:40.591136] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:53.728 [2024-12-06 13:17:40.591236] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:53.728 [2024-12-06 13:17:40.591274] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:53.728 [2024-12-06 13:17:40.591292] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:53.728 [2024-12-06 13:17:40.594516] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:53.728 [2024-12-06 13:17:40.594572] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:53.728 pt2 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:53.728 malloc3 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:53.728 [2024-12-06 13:17:40.662063] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:53.728 [2024-12-06 13:17:40.662433] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:53.728 [2024-12-06 13:17:40.662534] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:53.728 [2024-12-06 13:17:40.662663] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:53.728 [2024-12-06 13:17:40.665937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:53.728 [2024-12-06 13:17:40.666113] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:53.728 pt3 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.728 13:17:40 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:53.728 malloc4 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.728 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:53.728 [2024-12-06 13:17:40.725369] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:53.728 [2024-12-06 13:17:40.725502] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:53.728 [2024-12-06 13:17:40.725541] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:22:53.728 [2024-12-06 13:17:40.725559] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:53.728 [2024-12-06 13:17:40.728668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:53.728 [2024-12-06 13:17:40.728716] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:53.728 pt4 00:22:53.729 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.729 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:53.729 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:53.729 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:22:53.729 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.729 13:17:40 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:53.729 [2024-12-06 13:17:40.737454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:53.729 [2024-12-06 13:17:40.740166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:53.988 [2024-12-06 13:17:40.740431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:53.988 [2024-12-06 13:17:40.740535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:53.988 [2024-12-06 13:17:40.740813] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:53.988 [2024-12-06 13:17:40.740838] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:53.988 [2024-12-06 13:17:40.741185] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:53.988 [2024-12-06 13:17:40.748226] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:53.988 [2024-12-06 13:17:40.748391] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:53.988 [2024-12-06 13:17:40.748861] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:53.988 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.988 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:22:53.988 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:53.988 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:53.988 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:53.988 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:53.988 
13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:53.988 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:53.988 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:53.988 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:53.988 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:53.988 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.988 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.988 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.988 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:53.988 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.988 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:53.988 "name": "raid_bdev1", 00:22:53.988 "uuid": "89fe5e7a-0aaf-4fa4-b51d-d4c82262688d", 00:22:53.988 "strip_size_kb": 64, 00:22:53.988 "state": "online", 00:22:53.988 "raid_level": "raid5f", 00:22:53.988 "superblock": true, 00:22:53.988 "num_base_bdevs": 4, 00:22:53.988 "num_base_bdevs_discovered": 4, 00:22:53.988 "num_base_bdevs_operational": 4, 00:22:53.988 "base_bdevs_list": [ 00:22:53.988 { 00:22:53.988 "name": "pt1", 00:22:53.988 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:53.988 "is_configured": true, 00:22:53.988 "data_offset": 2048, 00:22:53.988 "data_size": 63488 00:22:53.988 }, 00:22:53.988 { 00:22:53.988 "name": "pt2", 00:22:53.988 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:53.988 "is_configured": true, 00:22:53.988 "data_offset": 2048, 00:22:53.988 
"data_size": 63488 00:22:53.988 }, 00:22:53.988 { 00:22:53.988 "name": "pt3", 00:22:53.988 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:53.988 "is_configured": true, 00:22:53.988 "data_offset": 2048, 00:22:53.988 "data_size": 63488 00:22:53.988 }, 00:22:53.988 { 00:22:53.988 "name": "pt4", 00:22:53.988 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:53.988 "is_configured": true, 00:22:53.988 "data_offset": 2048, 00:22:53.988 "data_size": 63488 00:22:53.988 } 00:22:53.988 ] 00:22:53.988 }' 00:22:53.988 13:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:53.988 13:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:54.556 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:22:54.556 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:54.556 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:54.556 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:54.556 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:54.556 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:54.556 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:54.556 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:54.556 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.556 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:54.556 [2024-12-06 13:17:41.281487] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:54.556 13:17:41 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.556 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:54.556 "name": "raid_bdev1", 00:22:54.556 "aliases": [ 00:22:54.556 "89fe5e7a-0aaf-4fa4-b51d-d4c82262688d" 00:22:54.556 ], 00:22:54.556 "product_name": "Raid Volume", 00:22:54.556 "block_size": 512, 00:22:54.556 "num_blocks": 190464, 00:22:54.556 "uuid": "89fe5e7a-0aaf-4fa4-b51d-d4c82262688d", 00:22:54.556 "assigned_rate_limits": { 00:22:54.556 "rw_ios_per_sec": 0, 00:22:54.556 "rw_mbytes_per_sec": 0, 00:22:54.556 "r_mbytes_per_sec": 0, 00:22:54.557 "w_mbytes_per_sec": 0 00:22:54.557 }, 00:22:54.557 "claimed": false, 00:22:54.557 "zoned": false, 00:22:54.557 "supported_io_types": { 00:22:54.557 "read": true, 00:22:54.557 "write": true, 00:22:54.557 "unmap": false, 00:22:54.557 "flush": false, 00:22:54.557 "reset": true, 00:22:54.557 "nvme_admin": false, 00:22:54.557 "nvme_io": false, 00:22:54.557 "nvme_io_md": false, 00:22:54.557 "write_zeroes": true, 00:22:54.557 "zcopy": false, 00:22:54.557 "get_zone_info": false, 00:22:54.557 "zone_management": false, 00:22:54.557 "zone_append": false, 00:22:54.557 "compare": false, 00:22:54.557 "compare_and_write": false, 00:22:54.557 "abort": false, 00:22:54.557 "seek_hole": false, 00:22:54.557 "seek_data": false, 00:22:54.557 "copy": false, 00:22:54.557 "nvme_iov_md": false 00:22:54.557 }, 00:22:54.557 "driver_specific": { 00:22:54.557 "raid": { 00:22:54.557 "uuid": "89fe5e7a-0aaf-4fa4-b51d-d4c82262688d", 00:22:54.557 "strip_size_kb": 64, 00:22:54.557 "state": "online", 00:22:54.557 "raid_level": "raid5f", 00:22:54.557 "superblock": true, 00:22:54.557 "num_base_bdevs": 4, 00:22:54.557 "num_base_bdevs_discovered": 4, 00:22:54.557 "num_base_bdevs_operational": 4, 00:22:54.557 "base_bdevs_list": [ 00:22:54.557 { 00:22:54.557 "name": "pt1", 00:22:54.557 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:54.557 "is_configured": true, 00:22:54.557 "data_offset": 2048, 
00:22:54.557 "data_size": 63488 00:22:54.557 }, 00:22:54.557 { 00:22:54.557 "name": "pt2", 00:22:54.557 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:54.557 "is_configured": true, 00:22:54.557 "data_offset": 2048, 00:22:54.557 "data_size": 63488 00:22:54.557 }, 00:22:54.557 { 00:22:54.557 "name": "pt3", 00:22:54.557 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:54.557 "is_configured": true, 00:22:54.557 "data_offset": 2048, 00:22:54.557 "data_size": 63488 00:22:54.557 }, 00:22:54.557 { 00:22:54.557 "name": "pt4", 00:22:54.557 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:54.557 "is_configured": true, 00:22:54.557 "data_offset": 2048, 00:22:54.557 "data_size": 63488 00:22:54.557 } 00:22:54.557 ] 00:22:54.557 } 00:22:54.557 } 00:22:54.557 }' 00:22:54.557 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:54.557 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:54.557 pt2 00:22:54.557 pt3 00:22:54.557 pt4' 00:22:54.557 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:54.557 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:54.557 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:54.557 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:54.557 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.557 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:54.557 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:54.557 13:17:41 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.557 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:54.557 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:54.557 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:54.557 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:54.557 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:54.557 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.557 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:54.557 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.557 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:54.557 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:54.557 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:54.557 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:22:54.557 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.557 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:54.557 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:54.557 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.816 13:17:41 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:54.816 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:54.816 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:54.816 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:54.817 [2024-12-06 13:17:41.689533] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=89fe5e7a-0aaf-4fa4-b51d-d4c82262688d 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
89fe5e7a-0aaf-4fa4-b51d-d4c82262688d ']' 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:54.817 [2024-12-06 13:17:41.737308] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:54.817 [2024-12-06 13:17:41.737602] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:54.817 [2024-12-06 13:17:41.737766] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:54.817 [2024-12-06 13:17:41.737898] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:54.817 [2024-12-06 13:17:41.737927] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:54.817 
13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:54.817 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.076 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.077 13:17:41 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.077 [2024-12-06 13:17:41.901402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:55.077 [2024-12-06 13:17:41.904194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:55.077 [2024-12-06 13:17:41.904276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:55.077 [2024-12-06 13:17:41.904334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:22:55.077 [2024-12-06 13:17:41.904419] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:55.077 [2024-12-06 13:17:41.904520] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:55.077 [2024-12-06 13:17:41.904556] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:22:55.077 [2024-12-06 13:17:41.904588] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:22:55.077 [2024-12-06 13:17:41.904616] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:55.077 [2024-12-06 13:17:41.904635] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:22:55.077 request: 00:22:55.077 { 00:22:55.077 "name": "raid_bdev1", 00:22:55.077 "raid_level": "raid5f", 00:22:55.077 "base_bdevs": [ 00:22:55.077 "malloc1", 00:22:55.077 "malloc2", 00:22:55.077 "malloc3", 00:22:55.077 "malloc4" 00:22:55.077 ], 00:22:55.077 "strip_size_kb": 64, 00:22:55.077 "superblock": false, 00:22:55.077 "method": "bdev_raid_create", 00:22:55.077 "req_id": 1 00:22:55.077 } 00:22:55.077 Got JSON-RPC error response 
00:22:55.077 response: 00:22:55.077 { 00:22:55.077 "code": -17, 00:22:55.077 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:55.077 } 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.077 [2024-12-06 13:17:41.977442] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:55.077 [2024-12-06 13:17:41.977584] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:22:55.077 [2024-12-06 13:17:41.977617] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:55.077 [2024-12-06 13:17:41.977641] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:55.077 [2024-12-06 13:17:41.980830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:55.077 [2024-12-06 13:17:41.980902] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:55.077 [2024-12-06 13:17:41.981028] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:55.077 [2024-12-06 13:17:41.981111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:55.077 pt1 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.077 13:17:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.077 13:17:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.077 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:55.077 "name": "raid_bdev1", 00:22:55.077 "uuid": "89fe5e7a-0aaf-4fa4-b51d-d4c82262688d", 00:22:55.077 "strip_size_kb": 64, 00:22:55.077 "state": "configuring", 00:22:55.077 "raid_level": "raid5f", 00:22:55.077 "superblock": true, 00:22:55.077 "num_base_bdevs": 4, 00:22:55.077 "num_base_bdevs_discovered": 1, 00:22:55.077 "num_base_bdevs_operational": 4, 00:22:55.077 "base_bdevs_list": [ 00:22:55.077 { 00:22:55.077 "name": "pt1", 00:22:55.077 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:55.077 "is_configured": true, 00:22:55.077 "data_offset": 2048, 00:22:55.077 "data_size": 63488 00:22:55.077 }, 00:22:55.077 { 00:22:55.077 "name": null, 00:22:55.077 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:55.077 "is_configured": false, 00:22:55.077 "data_offset": 2048, 00:22:55.077 "data_size": 63488 00:22:55.077 }, 00:22:55.077 { 00:22:55.077 "name": null, 00:22:55.077 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:55.077 "is_configured": false, 00:22:55.077 "data_offset": 2048, 00:22:55.077 "data_size": 63488 00:22:55.077 }, 00:22:55.077 { 00:22:55.077 "name": null, 00:22:55.077 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:55.077 "is_configured": false, 00:22:55.077 "data_offset": 2048, 00:22:55.077 "data_size": 63488 00:22:55.077 } 00:22:55.077 ] 00:22:55.077 }' 
00:22:55.077 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:55.077 13:17:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.645 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:22:55.645 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:55.645 13:17:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.645 13:17:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.645 [2024-12-06 13:17:42.497599] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:55.645 [2024-12-06 13:17:42.497731] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:55.645 [2024-12-06 13:17:42.497776] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:22:55.645 [2024-12-06 13:17:42.497796] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:55.645 [2024-12-06 13:17:42.498453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:55.645 [2024-12-06 13:17:42.498523] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:55.646 [2024-12-06 13:17:42.498644] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:55.646 [2024-12-06 13:17:42.498688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:55.646 pt2 00:22:55.646 13:17:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.646 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:22:55.646 13:17:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:22:55.646 13:17:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.646 [2024-12-06 13:17:42.505560] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:55.646 13:17:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.646 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:22:55.646 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:55.646 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:55.646 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:55.646 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:55.646 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:55.646 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:55.646 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:55.646 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:55.646 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:55.646 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.646 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:55.646 13:17:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.646 13:17:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.646 13:17:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:22:55.646 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:55.646 "name": "raid_bdev1", 00:22:55.646 "uuid": "89fe5e7a-0aaf-4fa4-b51d-d4c82262688d", 00:22:55.646 "strip_size_kb": 64, 00:22:55.646 "state": "configuring", 00:22:55.646 "raid_level": "raid5f", 00:22:55.646 "superblock": true, 00:22:55.646 "num_base_bdevs": 4, 00:22:55.646 "num_base_bdevs_discovered": 1, 00:22:55.646 "num_base_bdevs_operational": 4, 00:22:55.646 "base_bdevs_list": [ 00:22:55.646 { 00:22:55.646 "name": "pt1", 00:22:55.646 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:55.646 "is_configured": true, 00:22:55.646 "data_offset": 2048, 00:22:55.646 "data_size": 63488 00:22:55.646 }, 00:22:55.646 { 00:22:55.646 "name": null, 00:22:55.646 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:55.646 "is_configured": false, 00:22:55.646 "data_offset": 0, 00:22:55.646 "data_size": 63488 00:22:55.646 }, 00:22:55.646 { 00:22:55.646 "name": null, 00:22:55.646 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:55.646 "is_configured": false, 00:22:55.646 "data_offset": 2048, 00:22:55.646 "data_size": 63488 00:22:55.646 }, 00:22:55.646 { 00:22:55.646 "name": null, 00:22:55.646 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:55.646 "is_configured": false, 00:22:55.646 "data_offset": 2048, 00:22:55.646 "data_size": 63488 00:22:55.646 } 00:22:55.646 ] 00:22:55.646 }' 00:22:55.646 13:17:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:55.646 13:17:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.215 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:22:56.215 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:56.215 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:22:56.215 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.215 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.215 [2024-12-06 13:17:43.049747] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:56.215 [2024-12-06 13:17:43.049868] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:56.215 [2024-12-06 13:17:43.049905] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:22:56.215 [2024-12-06 13:17:43.049922] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:56.215 [2024-12-06 13:17:43.050624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:56.215 [2024-12-06 13:17:43.050659] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:56.215 [2024-12-06 13:17:43.050784] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:56.215 [2024-12-06 13:17:43.050820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:56.215 pt2 00:22:56.215 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.215 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:56.215 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:56.215 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:56.215 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.215 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.215 [2024-12-06 13:17:43.057672] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:22:56.215 [2024-12-06 13:17:43.057737] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:56.215 [2024-12-06 13:17:43.057768] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:22:56.215 [2024-12-06 13:17:43.057784] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:56.215 [2024-12-06 13:17:43.058313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:56.215 [2024-12-06 13:17:43.058347] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:56.215 [2024-12-06 13:17:43.058445] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:56.215 [2024-12-06 13:17:43.058513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:56.215 pt3 00:22:56.215 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.215 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:56.215 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:56.215 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:56.215 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.215 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.215 [2024-12-06 13:17:43.069661] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:56.215 [2024-12-06 13:17:43.069728] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:56.215 [2024-12-06 13:17:43.069758] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:22:56.215 [2024-12-06 13:17:43.069781] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:56.215 [2024-12-06 13:17:43.070365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:56.215 [2024-12-06 13:17:43.070400] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:56.215 [2024-12-06 13:17:43.070522] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:22:56.215 [2024-12-06 13:17:43.070562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:56.215 [2024-12-06 13:17:43.070762] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:56.215 [2024-12-06 13:17:43.070778] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:56.215 [2024-12-06 13:17:43.071113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:56.215 [2024-12-06 13:17:43.077724] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:56.215 [2024-12-06 13:17:43.077767] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:22:56.215 [2024-12-06 13:17:43.078044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:56.215 pt4 00:22:56.215 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.215 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:56.215 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:56.215 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:22:56.215 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:56.215 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:22:56.215 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:56.215 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:56.215 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:56.215 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:56.215 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:56.216 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:56.216 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:56.216 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:56.216 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:56.216 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.216 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.216 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.216 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:56.216 "name": "raid_bdev1", 00:22:56.216 "uuid": "89fe5e7a-0aaf-4fa4-b51d-d4c82262688d", 00:22:56.216 "strip_size_kb": 64, 00:22:56.216 "state": "online", 00:22:56.216 "raid_level": "raid5f", 00:22:56.216 "superblock": true, 00:22:56.216 "num_base_bdevs": 4, 00:22:56.216 "num_base_bdevs_discovered": 4, 00:22:56.216 "num_base_bdevs_operational": 4, 00:22:56.216 "base_bdevs_list": [ 00:22:56.216 { 00:22:56.216 "name": "pt1", 00:22:56.216 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:56.216 "is_configured": true, 00:22:56.216 
"data_offset": 2048, 00:22:56.216 "data_size": 63488 00:22:56.216 }, 00:22:56.216 { 00:22:56.216 "name": "pt2", 00:22:56.216 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:56.216 "is_configured": true, 00:22:56.216 "data_offset": 2048, 00:22:56.216 "data_size": 63488 00:22:56.216 }, 00:22:56.216 { 00:22:56.216 "name": "pt3", 00:22:56.216 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:56.216 "is_configured": true, 00:22:56.216 "data_offset": 2048, 00:22:56.216 "data_size": 63488 00:22:56.216 }, 00:22:56.216 { 00:22:56.216 "name": "pt4", 00:22:56.216 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:56.216 "is_configured": true, 00:22:56.216 "data_offset": 2048, 00:22:56.216 "data_size": 63488 00:22:56.216 } 00:22:56.216 ] 00:22:56.216 }' 00:22:56.216 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:56.216 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.784 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:22:56.784 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:56.784 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:56.784 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:56.784 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:56.784 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:56.784 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:56.784 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:56.784 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.784 13:17:43 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.784 [2024-12-06 13:17:43.578513] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:56.785 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.785 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:56.785 "name": "raid_bdev1", 00:22:56.785 "aliases": [ 00:22:56.785 "89fe5e7a-0aaf-4fa4-b51d-d4c82262688d" 00:22:56.785 ], 00:22:56.785 "product_name": "Raid Volume", 00:22:56.785 "block_size": 512, 00:22:56.785 "num_blocks": 190464, 00:22:56.785 "uuid": "89fe5e7a-0aaf-4fa4-b51d-d4c82262688d", 00:22:56.785 "assigned_rate_limits": { 00:22:56.785 "rw_ios_per_sec": 0, 00:22:56.785 "rw_mbytes_per_sec": 0, 00:22:56.785 "r_mbytes_per_sec": 0, 00:22:56.785 "w_mbytes_per_sec": 0 00:22:56.785 }, 00:22:56.785 "claimed": false, 00:22:56.785 "zoned": false, 00:22:56.785 "supported_io_types": { 00:22:56.785 "read": true, 00:22:56.785 "write": true, 00:22:56.785 "unmap": false, 00:22:56.785 "flush": false, 00:22:56.785 "reset": true, 00:22:56.785 "nvme_admin": false, 00:22:56.785 "nvme_io": false, 00:22:56.785 "nvme_io_md": false, 00:22:56.785 "write_zeroes": true, 00:22:56.785 "zcopy": false, 00:22:56.785 "get_zone_info": false, 00:22:56.785 "zone_management": false, 00:22:56.785 "zone_append": false, 00:22:56.785 "compare": false, 00:22:56.785 "compare_and_write": false, 00:22:56.785 "abort": false, 00:22:56.785 "seek_hole": false, 00:22:56.785 "seek_data": false, 00:22:56.785 "copy": false, 00:22:56.785 "nvme_iov_md": false 00:22:56.785 }, 00:22:56.785 "driver_specific": { 00:22:56.785 "raid": { 00:22:56.785 "uuid": "89fe5e7a-0aaf-4fa4-b51d-d4c82262688d", 00:22:56.785 "strip_size_kb": 64, 00:22:56.785 "state": "online", 00:22:56.785 "raid_level": "raid5f", 00:22:56.785 "superblock": true, 00:22:56.785 "num_base_bdevs": 4, 00:22:56.785 "num_base_bdevs_discovered": 4, 
00:22:56.785 "num_base_bdevs_operational": 4, 00:22:56.785 "base_bdevs_list": [ 00:22:56.785 { 00:22:56.785 "name": "pt1", 00:22:56.785 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:56.785 "is_configured": true, 00:22:56.785 "data_offset": 2048, 00:22:56.785 "data_size": 63488 00:22:56.785 }, 00:22:56.785 { 00:22:56.785 "name": "pt2", 00:22:56.785 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:56.785 "is_configured": true, 00:22:56.785 "data_offset": 2048, 00:22:56.785 "data_size": 63488 00:22:56.785 }, 00:22:56.785 { 00:22:56.785 "name": "pt3", 00:22:56.785 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:56.785 "is_configured": true, 00:22:56.785 "data_offset": 2048, 00:22:56.785 "data_size": 63488 00:22:56.785 }, 00:22:56.785 { 00:22:56.785 "name": "pt4", 00:22:56.785 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:56.785 "is_configured": true, 00:22:56.785 "data_offset": 2048, 00:22:56.785 "data_size": 63488 00:22:56.785 } 00:22:56.785 ] 00:22:56.785 } 00:22:56.785 } 00:22:56.785 }' 00:22:56.785 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:56.785 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:56.785 pt2 00:22:56.785 pt3 00:22:56.785 pt4' 00:22:56.785 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:56.785 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:56.785 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:56.785 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:56.785 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:22:56.785 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.785 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.785 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.785 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:56.785 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:56.785 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:56.785 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:56.785 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:56.785 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.785 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.785 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.045 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:57.045 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:57.045 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:57.045 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:22:57.045 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.045 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.045 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:57.045 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.045 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:57.045 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:57.045 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:57.045 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:57.045 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:22:57.045 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.045 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.045 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.045 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:57.045 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:57.045 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:57.045 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.045 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.045 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:22:57.045 [2024-12-06 13:17:43.934592] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:57.045 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.045 13:17:43 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 89fe5e7a-0aaf-4fa4-b51d-d4c82262688d '!=' 89fe5e7a-0aaf-4fa4-b51d-d4c82262688d ']' 00:22:57.045 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:22:57.045 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:57.045 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:22:57.045 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:22:57.045 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.045 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.045 [2024-12-06 13:17:43.982433] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:57.045 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.045 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:57.045 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:57.045 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:57.045 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:57.045 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:57.045 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:57.045 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:57.046 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:57.046 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:22:57.046 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:57.046 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:57.046 13:17:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.046 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.046 13:17:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.046 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.046 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:57.046 "name": "raid_bdev1", 00:22:57.046 "uuid": "89fe5e7a-0aaf-4fa4-b51d-d4c82262688d", 00:22:57.046 "strip_size_kb": 64, 00:22:57.046 "state": "online", 00:22:57.046 "raid_level": "raid5f", 00:22:57.046 "superblock": true, 00:22:57.046 "num_base_bdevs": 4, 00:22:57.046 "num_base_bdevs_discovered": 3, 00:22:57.046 "num_base_bdevs_operational": 3, 00:22:57.046 "base_bdevs_list": [ 00:22:57.046 { 00:22:57.046 "name": null, 00:22:57.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:57.046 "is_configured": false, 00:22:57.046 "data_offset": 0, 00:22:57.046 "data_size": 63488 00:22:57.046 }, 00:22:57.046 { 00:22:57.046 "name": "pt2", 00:22:57.046 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:57.046 "is_configured": true, 00:22:57.046 "data_offset": 2048, 00:22:57.046 "data_size": 63488 00:22:57.046 }, 00:22:57.046 { 00:22:57.046 "name": "pt3", 00:22:57.046 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:57.046 "is_configured": true, 00:22:57.046 "data_offset": 2048, 00:22:57.046 "data_size": 63488 00:22:57.046 }, 00:22:57.046 { 00:22:57.046 "name": "pt4", 00:22:57.046 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:57.046 "is_configured": true, 00:22:57.046 
"data_offset": 2048, 00:22:57.046 "data_size": 63488 00:22:57.046 } 00:22:57.046 ] 00:22:57.046 }' 00:22:57.046 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:57.046 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.615 [2024-12-06 13:17:44.506512] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:57.615 [2024-12-06 13:17:44.506562] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:57.615 [2024-12-06 13:17:44.506684] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:57.615 [2024-12-06 13:17:44.506803] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:57.615 [2024-12-06 13:17:44.506821] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.615 [2024-12-06 13:17:44.594492] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:57.615 [2024-12-06 13:17:44.594573] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:57.615 [2024-12-06 13:17:44.594606] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:22:57.615 [2024-12-06 13:17:44.594623] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:57.615 [2024-12-06 13:17:44.597896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:57.615 [2024-12-06 13:17:44.597943] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:57.615 [2024-12-06 13:17:44.598069] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:57.615 [2024-12-06 13:17:44.598139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:57.615 pt2 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:57.615 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:57.616 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:57.616 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:57.616 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:57.616 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.616 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.616 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.616 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.875 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:57.875 "name": "raid_bdev1", 00:22:57.875 "uuid": "89fe5e7a-0aaf-4fa4-b51d-d4c82262688d", 00:22:57.875 "strip_size_kb": 64, 00:22:57.875 "state": "configuring", 00:22:57.875 "raid_level": "raid5f", 00:22:57.875 "superblock": true, 00:22:57.875 
"num_base_bdevs": 4, 00:22:57.875 "num_base_bdevs_discovered": 1, 00:22:57.875 "num_base_bdevs_operational": 3, 00:22:57.875 "base_bdevs_list": [ 00:22:57.875 { 00:22:57.875 "name": null, 00:22:57.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:57.875 "is_configured": false, 00:22:57.875 "data_offset": 2048, 00:22:57.875 "data_size": 63488 00:22:57.875 }, 00:22:57.875 { 00:22:57.875 "name": "pt2", 00:22:57.875 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:57.875 "is_configured": true, 00:22:57.875 "data_offset": 2048, 00:22:57.875 "data_size": 63488 00:22:57.875 }, 00:22:57.875 { 00:22:57.875 "name": null, 00:22:57.875 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:57.875 "is_configured": false, 00:22:57.875 "data_offset": 2048, 00:22:57.875 "data_size": 63488 00:22:57.875 }, 00:22:57.875 { 00:22:57.875 "name": null, 00:22:57.875 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:57.875 "is_configured": false, 00:22:57.875 "data_offset": 2048, 00:22:57.875 "data_size": 63488 00:22:57.875 } 00:22:57.875 ] 00:22:57.875 }' 00:22:57.875 13:17:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:57.875 13:17:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.135 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:22:58.135 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:58.135 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:58.135 13:17:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.135 13:17:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.135 [2024-12-06 13:17:45.110671] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:58.135 [2024-12-06 
13:17:45.110797] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:58.135 [2024-12-06 13:17:45.110842] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:22:58.135 [2024-12-06 13:17:45.110858] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:58.135 [2024-12-06 13:17:45.111536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:58.135 [2024-12-06 13:17:45.111577] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:58.135 [2024-12-06 13:17:45.111705] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:58.135 [2024-12-06 13:17:45.111750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:58.135 pt3 00:22:58.135 13:17:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.135 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:22:58.135 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:58.135 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:58.135 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:58.135 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:58.135 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:58.135 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:58.135 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:58.135 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:22:58.135 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:58.135 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:58.135 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:58.135 13:17:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.135 13:17:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.135 13:17:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.393 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:58.393 "name": "raid_bdev1", 00:22:58.393 "uuid": "89fe5e7a-0aaf-4fa4-b51d-d4c82262688d", 00:22:58.393 "strip_size_kb": 64, 00:22:58.393 "state": "configuring", 00:22:58.393 "raid_level": "raid5f", 00:22:58.393 "superblock": true, 00:22:58.393 "num_base_bdevs": 4, 00:22:58.393 "num_base_bdevs_discovered": 2, 00:22:58.393 "num_base_bdevs_operational": 3, 00:22:58.393 "base_bdevs_list": [ 00:22:58.393 { 00:22:58.393 "name": null, 00:22:58.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:58.393 "is_configured": false, 00:22:58.393 "data_offset": 2048, 00:22:58.393 "data_size": 63488 00:22:58.393 }, 00:22:58.393 { 00:22:58.393 "name": "pt2", 00:22:58.393 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:58.393 "is_configured": true, 00:22:58.393 "data_offset": 2048, 00:22:58.393 "data_size": 63488 00:22:58.393 }, 00:22:58.393 { 00:22:58.393 "name": "pt3", 00:22:58.393 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:58.393 "is_configured": true, 00:22:58.393 "data_offset": 2048, 00:22:58.393 "data_size": 63488 00:22:58.393 }, 00:22:58.393 { 00:22:58.393 "name": null, 00:22:58.393 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:58.393 "is_configured": false, 00:22:58.393 "data_offset": 2048, 
00:22:58.393 "data_size": 63488 00:22:58.393 } 00:22:58.393 ] 00:22:58.393 }' 00:22:58.393 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:58.393 13:17:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.651 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:22:58.651 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:58.651 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:22:58.651 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:58.651 13:17:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.651 13:17:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.651 [2024-12-06 13:17:45.618900] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:58.651 [2024-12-06 13:17:45.619036] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:58.651 [2024-12-06 13:17:45.619087] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:22:58.651 [2024-12-06 13:17:45.619110] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:58.651 [2024-12-06 13:17:45.619963] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:58.651 [2024-12-06 13:17:45.620018] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:58.651 [2024-12-06 13:17:45.620174] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:22:58.651 [2024-12-06 13:17:45.620232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:58.651 [2024-12-06 13:17:45.620490] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:58.651 [2024-12-06 13:17:45.620522] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:58.651 [2024-12-06 13:17:45.620945] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:22:58.651 [2024-12-06 13:17:45.629353] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:58.651 [2024-12-06 13:17:45.629397] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:22:58.651 [2024-12-06 13:17:45.629821] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:58.651 pt4 00:22:58.651 13:17:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.651 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:58.651 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:58.651 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:58.651 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:58.652 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:58.652 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:58.652 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:58.652 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:58.652 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:58.652 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:58.652 
13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:58.652 13:17:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.652 13:17:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.652 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:58.652 13:17:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.910 13:17:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:58.910 "name": "raid_bdev1", 00:22:58.910 "uuid": "89fe5e7a-0aaf-4fa4-b51d-d4c82262688d", 00:22:58.910 "strip_size_kb": 64, 00:22:58.910 "state": "online", 00:22:58.910 "raid_level": "raid5f", 00:22:58.910 "superblock": true, 00:22:58.910 "num_base_bdevs": 4, 00:22:58.910 "num_base_bdevs_discovered": 3, 00:22:58.910 "num_base_bdevs_operational": 3, 00:22:58.910 "base_bdevs_list": [ 00:22:58.910 { 00:22:58.910 "name": null, 00:22:58.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:58.910 "is_configured": false, 00:22:58.910 "data_offset": 2048, 00:22:58.910 "data_size": 63488 00:22:58.910 }, 00:22:58.910 { 00:22:58.910 "name": "pt2", 00:22:58.910 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:58.910 "is_configured": true, 00:22:58.910 "data_offset": 2048, 00:22:58.910 "data_size": 63488 00:22:58.910 }, 00:22:58.910 { 00:22:58.910 "name": "pt3", 00:22:58.910 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:58.910 "is_configured": true, 00:22:58.910 "data_offset": 2048, 00:22:58.910 "data_size": 63488 00:22:58.910 }, 00:22:58.910 { 00:22:58.910 "name": "pt4", 00:22:58.910 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:58.910 "is_configured": true, 00:22:58.910 "data_offset": 2048, 00:22:58.910 "data_size": 63488 00:22:58.910 } 00:22:58.910 ] 00:22:58.910 }' 00:22:58.910 13:17:45 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:58.910 13:17:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.168 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:59.168 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.168 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.168 [2024-12-06 13:17:46.150851] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:59.168 [2024-12-06 13:17:46.150904] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:59.168 [2024-12-06 13:17:46.151016] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:59.168 [2024-12-06 13:17:46.151120] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:59.168 [2024-12-06 13:17:46.151151] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:22:59.168 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.168 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:22:59.168 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.168 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.168 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.168 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.426 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:22:59.426 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:22:59.426 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:22:59.426 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:22:59.426 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:22:59.426 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.426 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.426 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.427 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:59.427 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.427 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.427 [2024-12-06 13:17:46.210848] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:59.427 [2024-12-06 13:17:46.210953] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:59.427 [2024-12-06 13:17:46.210995] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:22:59.427 [2024-12-06 13:17:46.211014] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:59.427 [2024-12-06 13:17:46.213926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:59.427 [2024-12-06 13:17:46.213975] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:59.427 [2024-12-06 13:17:46.214086] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:59.427 [2024-12-06 13:17:46.214153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:59.427 
[2024-12-06 13:17:46.214327] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:59.427 [2024-12-06 13:17:46.214352] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:59.427 [2024-12-06 13:17:46.214375] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:22:59.427 [2024-12-06 13:17:46.214453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:59.427 [2024-12-06 13:17:46.214611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:59.427 pt1 00:22:59.427 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.427 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:22:59.427 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:22:59.427 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:59.427 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:59.427 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:59.427 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:59.427 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:59.427 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:59.427 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:59.427 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:59.427 13:17:46 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:22:59.427 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.427 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.427 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.427 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.427 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.427 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:59.427 "name": "raid_bdev1", 00:22:59.427 "uuid": "89fe5e7a-0aaf-4fa4-b51d-d4c82262688d", 00:22:59.427 "strip_size_kb": 64, 00:22:59.427 "state": "configuring", 00:22:59.427 "raid_level": "raid5f", 00:22:59.427 "superblock": true, 00:22:59.427 "num_base_bdevs": 4, 00:22:59.427 "num_base_bdevs_discovered": 2, 00:22:59.427 "num_base_bdevs_operational": 3, 00:22:59.427 "base_bdevs_list": [ 00:22:59.427 { 00:22:59.427 "name": null, 00:22:59.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.427 "is_configured": false, 00:22:59.427 "data_offset": 2048, 00:22:59.427 "data_size": 63488 00:22:59.427 }, 00:22:59.427 { 00:22:59.427 "name": "pt2", 00:22:59.427 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:59.427 "is_configured": true, 00:22:59.427 "data_offset": 2048, 00:22:59.427 "data_size": 63488 00:22:59.427 }, 00:22:59.427 { 00:22:59.427 "name": "pt3", 00:22:59.427 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:59.427 "is_configured": true, 00:22:59.427 "data_offset": 2048, 00:22:59.427 "data_size": 63488 00:22:59.427 }, 00:22:59.427 { 00:22:59.427 "name": null, 00:22:59.427 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:59.427 "is_configured": false, 00:22:59.427 "data_offset": 2048, 00:22:59.427 "data_size": 63488 00:22:59.427 } 00:22:59.427 ] 
00:22:59.427 }' 00:22:59.427 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:59.427 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.998 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:22:59.998 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:59.998 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.998 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.998 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.998 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:22:59.998 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:59.998 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.998 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.998 [2024-12-06 13:17:46.787102] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:59.998 [2024-12-06 13:17:46.787209] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:59.998 [2024-12-06 13:17:46.787285] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:22:59.998 [2024-12-06 13:17:46.787306] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:59.998 [2024-12-06 13:17:46.788143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:59.998 [2024-12-06 13:17:46.788208] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:22:59.998 [2024-12-06 13:17:46.788355] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:22:59.998 [2024-12-06 13:17:46.788399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:59.998 [2024-12-06 13:17:46.788675] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:22:59.998 [2024-12-06 13:17:46.788711] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:59.998 [2024-12-06 13:17:46.789148] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:22:59.998 [2024-12-06 13:17:46.797817] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:22:59.998 [2024-12-06 13:17:46.797888] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:22:59.998 [2024-12-06 13:17:46.798329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:59.998 pt4 00:22:59.998 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.998 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:59.998 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:59.998 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:59.998 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:59.998 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:59.998 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:59.998 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:59.998 13:17:46 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:59.998 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:59.998 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:59.998 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.998 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.998 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:59.998 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.998 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.998 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:59.998 "name": "raid_bdev1", 00:22:59.998 "uuid": "89fe5e7a-0aaf-4fa4-b51d-d4c82262688d", 00:22:59.998 "strip_size_kb": 64, 00:22:59.998 "state": "online", 00:22:59.998 "raid_level": "raid5f", 00:22:59.998 "superblock": true, 00:22:59.998 "num_base_bdevs": 4, 00:22:59.998 "num_base_bdevs_discovered": 3, 00:22:59.998 "num_base_bdevs_operational": 3, 00:22:59.998 "base_bdevs_list": [ 00:22:59.998 { 00:22:59.998 "name": null, 00:22:59.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.998 "is_configured": false, 00:22:59.998 "data_offset": 2048, 00:22:59.998 "data_size": 63488 00:22:59.998 }, 00:22:59.998 { 00:22:59.998 "name": "pt2", 00:22:59.998 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:59.998 "is_configured": true, 00:22:59.998 "data_offset": 2048, 00:22:59.998 "data_size": 63488 00:22:59.998 }, 00:22:59.998 { 00:22:59.998 "name": "pt3", 00:22:59.998 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:59.998 "is_configured": true, 00:22:59.998 "data_offset": 2048, 00:22:59.998 "data_size": 63488 
00:22:59.998 }, 00:22:59.998 { 00:22:59.998 "name": "pt4", 00:22:59.998 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:59.998 "is_configured": true, 00:22:59.998 "data_offset": 2048, 00:22:59.998 "data_size": 63488 00:22:59.998 } 00:22:59.998 ] 00:22:59.998 }' 00:22:59.998 13:17:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:59.998 13:17:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.601 13:17:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:23:00.601 13:17:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:00.601 13:17:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.601 13:17:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.601 13:17:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.601 13:17:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:23:00.601 13:17:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:00.601 13:17:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.601 13:17:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.601 13:17:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:23:00.601 [2024-12-06 13:17:47.376630] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:00.601 13:17:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.601 13:17:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 89fe5e7a-0aaf-4fa4-b51d-d4c82262688d '!=' 89fe5e7a-0aaf-4fa4-b51d-d4c82262688d ']' 00:23:00.601 13:17:47 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84879 00:23:00.601 13:17:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84879 ']' 00:23:00.601 13:17:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84879 00:23:00.602 13:17:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:23:00.602 13:17:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:00.602 13:17:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84879 00:23:00.602 13:17:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:00.602 13:17:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:00.602 killing process with pid 84879 00:23:00.602 13:17:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84879' 00:23:00.602 13:17:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84879 00:23:00.602 [2024-12-06 13:17:47.461034] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:00.602 13:17:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84879 00:23:00.602 [2024-12-06 13:17:47.461184] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:00.602 [2024-12-06 13:17:47.461319] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:00.602 [2024-12-06 13:17:47.461357] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:23:00.860 [2024-12-06 13:17:47.848329] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:02.234 13:17:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:23:02.234 
00:23:02.234 real 0m9.588s 00:23:02.235 user 0m15.428s 00:23:02.235 sys 0m1.509s 00:23:02.235 13:17:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:02.235 13:17:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.235 ************************************ 00:23:02.235 END TEST raid5f_superblock_test 00:23:02.235 ************************************ 00:23:02.235 13:17:49 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:23:02.235 13:17:49 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:23:02.235 13:17:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:23:02.235 13:17:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:02.235 13:17:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:02.235 ************************************ 00:23:02.235 START TEST raid5f_rebuild_test 00:23:02.235 ************************************ 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:23:02.235 13:17:49 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85375 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85375 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 85375 ']' 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:02.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:02.235 13:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.235 [2024-12-06 13:17:49.161948] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:23:02.235 [2024-12-06 13:17:49.162723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85375 ] 00:23:02.235 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:02.235 Zero copy mechanism will not be used. 00:23:02.520 [2024-12-06 13:17:49.342305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.520 [2024-12-06 13:17:49.488709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.777 [2024-12-06 13:17:49.712016] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:02.777 [2024-12-06 13:17:49.712084] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:03.343 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:03.343 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:23:03.343 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:03.343 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:03.343 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.343 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.343 BaseBdev1_malloc 00:23:03.343 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.343 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:03.343 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.343 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:23:03.343 [2024-12-06 13:17:50.227600] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:03.343 [2024-12-06 13:17:50.227689] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:03.343 [2024-12-06 13:17:50.227724] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:03.343 [2024-12-06 13:17:50.227743] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:03.343 [2024-12-06 13:17:50.230848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:03.343 [2024-12-06 13:17:50.230900] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:03.343 BaseBdev1 00:23:03.343 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.343 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:03.343 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:03.343 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.343 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.343 BaseBdev2_malloc 00:23:03.343 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.343 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:03.343 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.343 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.343 [2024-12-06 13:17:50.287463] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:03.343 [2024-12-06 13:17:50.287567] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:03.343 [2024-12-06 13:17:50.287599] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:03.343 [2024-12-06 13:17:50.287617] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:03.343 [2024-12-06 13:17:50.290667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:03.343 [2024-12-06 13:17:50.290716] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:03.343 BaseBdev2 00:23:03.343 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.343 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:03.343 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:03.343 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.343 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.343 BaseBdev3_malloc 00:23:03.343 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.344 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:03.344 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.344 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.603 [2024-12-06 13:17:50.357701] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:03.603 [2024-12-06 13:17:50.357784] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:03.603 [2024-12-06 13:17:50.357821] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:03.603 
[2024-12-06 13:17:50.357840] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:03.603 [2024-12-06 13:17:50.360829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:03.603 [2024-12-06 13:17:50.360882] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:03.603 BaseBdev3 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.603 BaseBdev4_malloc 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.603 [2024-12-06 13:17:50.413431] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:03.603 [2024-12-06 13:17:50.413541] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:03.603 [2024-12-06 13:17:50.413577] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:23:03.603 [2024-12-06 13:17:50.413595] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:03.603 [2024-12-06 13:17:50.416646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:23:03.603 [2024-12-06 13:17:50.416706] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:03.603 BaseBdev4 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.603 spare_malloc 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.603 spare_delay 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.603 [2024-12-06 13:17:50.477417] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:03.603 [2024-12-06 13:17:50.477502] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:03.603 [2024-12-06 13:17:50.477535] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:23:03.603 [2024-12-06 13:17:50.477553] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:03.603 [2024-12-06 13:17:50.480628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:03.603 [2024-12-06 13:17:50.480681] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:03.603 spare 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.603 [2024-12-06 13:17:50.485559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:03.603 [2024-12-06 13:17:50.488175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:03.603 [2024-12-06 13:17:50.488276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:03.603 [2024-12-06 13:17:50.488361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:03.603 [2024-12-06 13:17:50.488521] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:03.603 [2024-12-06 13:17:50.488545] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:23:03.603 [2024-12-06 13:17:50.488901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:03.603 [2024-12-06 13:17:50.495762] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:03.603 [2024-12-06 13:17:50.495793] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:03.603 [2024-12-06 
13:17:50.496067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.603 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:03.603 "name": "raid_bdev1", 00:23:03.603 "uuid": 
"16f913dd-8c01-4186-ab39-7bb887181980", 00:23:03.603 "strip_size_kb": 64, 00:23:03.603 "state": "online", 00:23:03.603 "raid_level": "raid5f", 00:23:03.603 "superblock": false, 00:23:03.603 "num_base_bdevs": 4, 00:23:03.603 "num_base_bdevs_discovered": 4, 00:23:03.603 "num_base_bdevs_operational": 4, 00:23:03.603 "base_bdevs_list": [ 00:23:03.603 { 00:23:03.603 "name": "BaseBdev1", 00:23:03.603 "uuid": "571d5f49-6735-558d-a450-bf289094cb28", 00:23:03.603 "is_configured": true, 00:23:03.603 "data_offset": 0, 00:23:03.603 "data_size": 65536 00:23:03.603 }, 00:23:03.603 { 00:23:03.603 "name": "BaseBdev2", 00:23:03.603 "uuid": "0f3410b9-c750-5b34-a626-534fe1c584c5", 00:23:03.603 "is_configured": true, 00:23:03.603 "data_offset": 0, 00:23:03.603 "data_size": 65536 00:23:03.603 }, 00:23:03.603 { 00:23:03.603 "name": "BaseBdev3", 00:23:03.603 "uuid": "acfacd8a-e998-572b-9959-d818cb389024", 00:23:03.603 "is_configured": true, 00:23:03.603 "data_offset": 0, 00:23:03.604 "data_size": 65536 00:23:03.604 }, 00:23:03.604 { 00:23:03.604 "name": "BaseBdev4", 00:23:03.604 "uuid": "21888c3c-fb5e-596b-85dc-d8cd76265334", 00:23:03.604 "is_configured": true, 00:23:03.604 "data_offset": 0, 00:23:03.604 "data_size": 65536 00:23:03.604 } 00:23:03.604 ] 00:23:03.604 }' 00:23:03.604 13:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:03.604 13:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.221 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:04.221 13:17:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.221 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:23:04.221 13:17:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.221 [2024-12-06 13:17:51.012401] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:23:04.221 13:17:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.221 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:23:04.221 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:04.221 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:04.221 13:17:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.221 13:17:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.221 13:17:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.221 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:23:04.221 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:23:04.221 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:23:04.221 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:23:04.221 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:23:04.221 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:04.221 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:04.221 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:04.221 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:04.221 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:04.221 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:23:04.221 13:17:51 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:04.221 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:04.221 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:04.479 [2024-12-06 13:17:51.392277] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:04.479 /dev/nbd0 00:23:04.479 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:04.479 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:04.479 13:17:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:04.479 13:17:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:23:04.479 13:17:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:04.479 13:17:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:04.479 13:17:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:04.479 13:17:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:23:04.479 13:17:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:04.479 13:17:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:04.479 13:17:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:04.479 1+0 records in 00:23:04.479 1+0 records out 00:23:04.479 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248269 s, 16.5 MB/s 00:23:04.479 13:17:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:04.479 13:17:51 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:23:04.479 13:17:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:04.479 13:17:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:04.479 13:17:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:23:04.479 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:04.479 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:04.479 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:23:04.479 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:23:04.479 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:23:04.479 13:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:23:05.046 512+0 records in 00:23:05.046 512+0 records out 00:23:05.046 100663296 bytes (101 MB, 96 MiB) copied, 0.594624 s, 169 MB/s 00:23:05.046 13:17:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:23:05.046 13:17:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:05.046 13:17:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:05.046 13:17:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:05.046 13:17:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:23:05.046 13:17:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:05.046 13:17:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:23:05.305 [2024-12-06 13:17:52.287150] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:05.305 13:17:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:05.305 13:17:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:05.305 13:17:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:05.305 13:17:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:05.305 13:17:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:05.305 13:17:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:05.305 13:17:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:05.305 13:17:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:05.305 13:17:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:23:05.305 13:17:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.305 13:17:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:05.562 [2024-12-06 13:17:52.318836] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:05.562 13:17:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.563 13:17:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:05.563 13:17:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:05.563 13:17:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:05.563 13:17:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:05.563 13:17:52 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:05.563 13:17:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:05.563 13:17:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:05.563 13:17:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:05.563 13:17:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:05.563 13:17:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:05.563 13:17:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:05.563 13:17:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.563 13:17:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:05.563 13:17:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:05.563 13:17:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.563 13:17:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:05.563 "name": "raid_bdev1", 00:23:05.563 "uuid": "16f913dd-8c01-4186-ab39-7bb887181980", 00:23:05.563 "strip_size_kb": 64, 00:23:05.563 "state": "online", 00:23:05.563 "raid_level": "raid5f", 00:23:05.563 "superblock": false, 00:23:05.563 "num_base_bdevs": 4, 00:23:05.563 "num_base_bdevs_discovered": 3, 00:23:05.563 "num_base_bdevs_operational": 3, 00:23:05.563 "base_bdevs_list": [ 00:23:05.563 { 00:23:05.563 "name": null, 00:23:05.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:05.563 "is_configured": false, 00:23:05.563 "data_offset": 0, 00:23:05.563 "data_size": 65536 00:23:05.563 }, 00:23:05.563 { 00:23:05.563 "name": "BaseBdev2", 00:23:05.563 "uuid": "0f3410b9-c750-5b34-a626-534fe1c584c5", 00:23:05.563 "is_configured": true, 00:23:05.563 
"data_offset": 0, 00:23:05.563 "data_size": 65536 00:23:05.563 }, 00:23:05.563 { 00:23:05.563 "name": "BaseBdev3", 00:23:05.563 "uuid": "acfacd8a-e998-572b-9959-d818cb389024", 00:23:05.563 "is_configured": true, 00:23:05.563 "data_offset": 0, 00:23:05.563 "data_size": 65536 00:23:05.563 }, 00:23:05.563 { 00:23:05.563 "name": "BaseBdev4", 00:23:05.563 "uuid": "21888c3c-fb5e-596b-85dc-d8cd76265334", 00:23:05.563 "is_configured": true, 00:23:05.563 "data_offset": 0, 00:23:05.563 "data_size": 65536 00:23:05.563 } 00:23:05.563 ] 00:23:05.563 }' 00:23:05.563 13:17:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:05.563 13:17:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:05.821 13:17:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:05.821 13:17:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.821 13:17:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:05.821 [2024-12-06 13:17:52.814986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:05.821 [2024-12-06 13:17:52.829354] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:23:05.821 13:17:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.821 13:17:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:23:06.079 [2024-12-06 13:17:52.838943] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:07.013 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:07.013 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:07.013 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:23:07.013 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:07.013 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:07.013 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:07.013 13:17:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.013 13:17:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.013 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.013 13:17:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.013 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:07.013 "name": "raid_bdev1", 00:23:07.013 "uuid": "16f913dd-8c01-4186-ab39-7bb887181980", 00:23:07.013 "strip_size_kb": 64, 00:23:07.013 "state": "online", 00:23:07.013 "raid_level": "raid5f", 00:23:07.013 "superblock": false, 00:23:07.013 "num_base_bdevs": 4, 00:23:07.013 "num_base_bdevs_discovered": 4, 00:23:07.013 "num_base_bdevs_operational": 4, 00:23:07.013 "process": { 00:23:07.013 "type": "rebuild", 00:23:07.013 "target": "spare", 00:23:07.013 "progress": { 00:23:07.013 "blocks": 17280, 00:23:07.013 "percent": 8 00:23:07.013 } 00:23:07.013 }, 00:23:07.013 "base_bdevs_list": [ 00:23:07.013 { 00:23:07.013 "name": "spare", 00:23:07.013 "uuid": "8efcecd6-a30c-57be-80a7-3a1eda5fc256", 00:23:07.013 "is_configured": true, 00:23:07.013 "data_offset": 0, 00:23:07.013 "data_size": 65536 00:23:07.013 }, 00:23:07.013 { 00:23:07.013 "name": "BaseBdev2", 00:23:07.013 "uuid": "0f3410b9-c750-5b34-a626-534fe1c584c5", 00:23:07.013 "is_configured": true, 00:23:07.013 "data_offset": 0, 00:23:07.013 "data_size": 65536 00:23:07.013 }, 00:23:07.013 { 00:23:07.013 "name": "BaseBdev3", 00:23:07.013 "uuid": 
"acfacd8a-e998-572b-9959-d818cb389024", 00:23:07.013 "is_configured": true, 00:23:07.013 "data_offset": 0, 00:23:07.013 "data_size": 65536 00:23:07.013 }, 00:23:07.013 { 00:23:07.013 "name": "BaseBdev4", 00:23:07.013 "uuid": "21888c3c-fb5e-596b-85dc-d8cd76265334", 00:23:07.013 "is_configured": true, 00:23:07.013 "data_offset": 0, 00:23:07.013 "data_size": 65536 00:23:07.013 } 00:23:07.013 ] 00:23:07.013 }' 00:23:07.013 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:07.013 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:07.013 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:07.013 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:07.013 13:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:07.013 13:17:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.013 13:17:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.013 [2024-12-06 13:17:53.996452] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:07.270 [2024-12-06 13:17:54.052660] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:07.271 [2024-12-06 13:17:54.053016] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:07.271 [2024-12-06 13:17:54.053049] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:07.271 [2024-12-06 13:17:54.053067] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:07.271 13:17:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.271 13:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:07.271 13:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:07.271 13:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:07.271 13:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:07.271 13:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:07.271 13:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:07.271 13:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:07.271 13:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:07.271 13:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:07.271 13:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:07.271 13:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:07.271 13:17:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.271 13:17:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.271 13:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.271 13:17:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.271 13:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:07.271 "name": "raid_bdev1", 00:23:07.271 "uuid": "16f913dd-8c01-4186-ab39-7bb887181980", 00:23:07.271 "strip_size_kb": 64, 00:23:07.271 "state": "online", 00:23:07.271 "raid_level": "raid5f", 00:23:07.271 "superblock": false, 00:23:07.271 "num_base_bdevs": 4, 00:23:07.271 "num_base_bdevs_discovered": 3, 00:23:07.271 
"num_base_bdevs_operational": 3, 00:23:07.271 "base_bdevs_list": [ 00:23:07.271 { 00:23:07.271 "name": null, 00:23:07.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.271 "is_configured": false, 00:23:07.271 "data_offset": 0, 00:23:07.271 "data_size": 65536 00:23:07.271 }, 00:23:07.271 { 00:23:07.271 "name": "BaseBdev2", 00:23:07.271 "uuid": "0f3410b9-c750-5b34-a626-534fe1c584c5", 00:23:07.271 "is_configured": true, 00:23:07.271 "data_offset": 0, 00:23:07.271 "data_size": 65536 00:23:07.271 }, 00:23:07.271 { 00:23:07.271 "name": "BaseBdev3", 00:23:07.271 "uuid": "acfacd8a-e998-572b-9959-d818cb389024", 00:23:07.271 "is_configured": true, 00:23:07.271 "data_offset": 0, 00:23:07.271 "data_size": 65536 00:23:07.271 }, 00:23:07.271 { 00:23:07.271 "name": "BaseBdev4", 00:23:07.271 "uuid": "21888c3c-fb5e-596b-85dc-d8cd76265334", 00:23:07.271 "is_configured": true, 00:23:07.271 "data_offset": 0, 00:23:07.271 "data_size": 65536 00:23:07.271 } 00:23:07.271 ] 00:23:07.271 }' 00:23:07.271 13:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:07.271 13:17:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.837 13:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:07.837 13:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:07.837 13:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:07.837 13:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:07.837 13:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:07.837 13:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:07.837 13:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.837 13:17:54 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.837 13:17:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.837 13:17:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.837 13:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:07.837 "name": "raid_bdev1", 00:23:07.837 "uuid": "16f913dd-8c01-4186-ab39-7bb887181980", 00:23:07.837 "strip_size_kb": 64, 00:23:07.837 "state": "online", 00:23:07.837 "raid_level": "raid5f", 00:23:07.837 "superblock": false, 00:23:07.837 "num_base_bdevs": 4, 00:23:07.837 "num_base_bdevs_discovered": 3, 00:23:07.837 "num_base_bdevs_operational": 3, 00:23:07.837 "base_bdevs_list": [ 00:23:07.837 { 00:23:07.837 "name": null, 00:23:07.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.837 "is_configured": false, 00:23:07.837 "data_offset": 0, 00:23:07.837 "data_size": 65536 00:23:07.837 }, 00:23:07.837 { 00:23:07.837 "name": "BaseBdev2", 00:23:07.837 "uuid": "0f3410b9-c750-5b34-a626-534fe1c584c5", 00:23:07.837 "is_configured": true, 00:23:07.837 "data_offset": 0, 00:23:07.837 "data_size": 65536 00:23:07.837 }, 00:23:07.837 { 00:23:07.837 "name": "BaseBdev3", 00:23:07.837 "uuid": "acfacd8a-e998-572b-9959-d818cb389024", 00:23:07.837 "is_configured": true, 00:23:07.837 "data_offset": 0, 00:23:07.837 "data_size": 65536 00:23:07.837 }, 00:23:07.837 { 00:23:07.837 "name": "BaseBdev4", 00:23:07.837 "uuid": "21888c3c-fb5e-596b-85dc-d8cd76265334", 00:23:07.837 "is_configured": true, 00:23:07.837 "data_offset": 0, 00:23:07.837 "data_size": 65536 00:23:07.837 } 00:23:07.837 ] 00:23:07.837 }' 00:23:07.837 13:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:07.837 13:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:07.837 13:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:23:07.837 13:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:07.837 13:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:07.837 13:17:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.837 13:17:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.837 [2024-12-06 13:17:54.724049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:07.837 [2024-12-06 13:17:54.737516] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:23:07.837 13:17:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.837 13:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:23:07.837 [2024-12-06 13:17:54.746438] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:08.798 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:08.798 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:08.798 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:08.798 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:08.798 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:08.798 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.798 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:08.798 13:17:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.798 13:17:55 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.798 13:17:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.798 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:08.798 "name": "raid_bdev1", 00:23:08.798 "uuid": "16f913dd-8c01-4186-ab39-7bb887181980", 00:23:08.798 "strip_size_kb": 64, 00:23:08.798 "state": "online", 00:23:08.798 "raid_level": "raid5f", 00:23:08.798 "superblock": false, 00:23:08.798 "num_base_bdevs": 4, 00:23:08.798 "num_base_bdevs_discovered": 4, 00:23:08.798 "num_base_bdevs_operational": 4, 00:23:08.798 "process": { 00:23:08.798 "type": "rebuild", 00:23:08.798 "target": "spare", 00:23:08.798 "progress": { 00:23:08.798 "blocks": 17280, 00:23:08.798 "percent": 8 00:23:08.798 } 00:23:08.798 }, 00:23:08.798 "base_bdevs_list": [ 00:23:08.798 { 00:23:08.798 "name": "spare", 00:23:08.798 "uuid": "8efcecd6-a30c-57be-80a7-3a1eda5fc256", 00:23:08.798 "is_configured": true, 00:23:08.798 "data_offset": 0, 00:23:08.798 "data_size": 65536 00:23:08.798 }, 00:23:08.798 { 00:23:08.798 "name": "BaseBdev2", 00:23:08.798 "uuid": "0f3410b9-c750-5b34-a626-534fe1c584c5", 00:23:08.798 "is_configured": true, 00:23:08.798 "data_offset": 0, 00:23:08.798 "data_size": 65536 00:23:08.798 }, 00:23:08.798 { 00:23:08.798 "name": "BaseBdev3", 00:23:08.798 "uuid": "acfacd8a-e998-572b-9959-d818cb389024", 00:23:08.798 "is_configured": true, 00:23:08.798 "data_offset": 0, 00:23:08.798 "data_size": 65536 00:23:08.798 }, 00:23:08.798 { 00:23:08.798 "name": "BaseBdev4", 00:23:08.798 "uuid": "21888c3c-fb5e-596b-85dc-d8cd76265334", 00:23:08.798 "is_configured": true, 00:23:08.798 "data_offset": 0, 00:23:08.798 "data_size": 65536 00:23:08.798 } 00:23:08.798 ] 00:23:08.798 }' 00:23:08.798 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:09.057 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:23:09.057 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:09.057 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:09.057 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:23:09.057 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:23:09.057 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:23:09.057 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=686 00:23:09.057 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:09.057 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:09.057 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:09.057 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:09.057 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:09.057 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:09.057 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:09.057 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:09.057 13:17:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.057 13:17:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.057 13:17:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.057 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:09.057 
"name": "raid_bdev1", 00:23:09.057 "uuid": "16f913dd-8c01-4186-ab39-7bb887181980", 00:23:09.057 "strip_size_kb": 64, 00:23:09.057 "state": "online", 00:23:09.057 "raid_level": "raid5f", 00:23:09.057 "superblock": false, 00:23:09.057 "num_base_bdevs": 4, 00:23:09.057 "num_base_bdevs_discovered": 4, 00:23:09.057 "num_base_bdevs_operational": 4, 00:23:09.057 "process": { 00:23:09.057 "type": "rebuild", 00:23:09.057 "target": "spare", 00:23:09.057 "progress": { 00:23:09.057 "blocks": 21120, 00:23:09.057 "percent": 10 00:23:09.057 } 00:23:09.057 }, 00:23:09.057 "base_bdevs_list": [ 00:23:09.057 { 00:23:09.057 "name": "spare", 00:23:09.057 "uuid": "8efcecd6-a30c-57be-80a7-3a1eda5fc256", 00:23:09.057 "is_configured": true, 00:23:09.057 "data_offset": 0, 00:23:09.057 "data_size": 65536 00:23:09.057 }, 00:23:09.057 { 00:23:09.057 "name": "BaseBdev2", 00:23:09.057 "uuid": "0f3410b9-c750-5b34-a626-534fe1c584c5", 00:23:09.057 "is_configured": true, 00:23:09.057 "data_offset": 0, 00:23:09.057 "data_size": 65536 00:23:09.057 }, 00:23:09.057 { 00:23:09.057 "name": "BaseBdev3", 00:23:09.057 "uuid": "acfacd8a-e998-572b-9959-d818cb389024", 00:23:09.057 "is_configured": true, 00:23:09.057 "data_offset": 0, 00:23:09.057 "data_size": 65536 00:23:09.057 }, 00:23:09.057 { 00:23:09.057 "name": "BaseBdev4", 00:23:09.057 "uuid": "21888c3c-fb5e-596b-85dc-d8cd76265334", 00:23:09.057 "is_configured": true, 00:23:09.057 "data_offset": 0, 00:23:09.057 "data_size": 65536 00:23:09.057 } 00:23:09.057 ] 00:23:09.057 }' 00:23:09.057 13:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:09.057 13:17:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:09.057 13:17:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:09.057 13:17:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:09.057 13:17:56 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:10.438 13:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:10.438 13:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:10.438 13:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:10.438 13:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:10.438 13:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:10.438 13:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:10.438 13:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:10.438 13:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:10.438 13:17:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.438 13:17:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.438 13:17:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.438 13:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:10.438 "name": "raid_bdev1", 00:23:10.438 "uuid": "16f913dd-8c01-4186-ab39-7bb887181980", 00:23:10.438 "strip_size_kb": 64, 00:23:10.439 "state": "online", 00:23:10.439 "raid_level": "raid5f", 00:23:10.439 "superblock": false, 00:23:10.439 "num_base_bdevs": 4, 00:23:10.439 "num_base_bdevs_discovered": 4, 00:23:10.439 "num_base_bdevs_operational": 4, 00:23:10.439 "process": { 00:23:10.439 "type": "rebuild", 00:23:10.439 "target": "spare", 00:23:10.439 "progress": { 00:23:10.439 "blocks": 42240, 00:23:10.439 "percent": 21 00:23:10.439 } 00:23:10.439 }, 00:23:10.439 "base_bdevs_list": [ 00:23:10.439 { 
00:23:10.439 "name": "spare", 00:23:10.439 "uuid": "8efcecd6-a30c-57be-80a7-3a1eda5fc256", 00:23:10.439 "is_configured": true, 00:23:10.439 "data_offset": 0, 00:23:10.439 "data_size": 65536 00:23:10.439 }, 00:23:10.439 { 00:23:10.439 "name": "BaseBdev2", 00:23:10.439 "uuid": "0f3410b9-c750-5b34-a626-534fe1c584c5", 00:23:10.439 "is_configured": true, 00:23:10.439 "data_offset": 0, 00:23:10.439 "data_size": 65536 00:23:10.439 }, 00:23:10.439 { 00:23:10.439 "name": "BaseBdev3", 00:23:10.439 "uuid": "acfacd8a-e998-572b-9959-d818cb389024", 00:23:10.439 "is_configured": true, 00:23:10.439 "data_offset": 0, 00:23:10.439 "data_size": 65536 00:23:10.439 }, 00:23:10.439 { 00:23:10.439 "name": "BaseBdev4", 00:23:10.439 "uuid": "21888c3c-fb5e-596b-85dc-d8cd76265334", 00:23:10.439 "is_configured": true, 00:23:10.439 "data_offset": 0, 00:23:10.439 "data_size": 65536 00:23:10.439 } 00:23:10.439 ] 00:23:10.439 }' 00:23:10.439 13:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:10.439 13:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:10.439 13:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:10.439 13:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:10.439 13:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:11.375 13:17:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:11.375 13:17:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:11.375 13:17:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:11.375 13:17:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:11.375 13:17:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:23:11.375 13:17:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:11.375 13:17:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:11.375 13:17:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:11.375 13:17:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.375 13:17:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.375 13:17:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.375 13:17:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:11.375 "name": "raid_bdev1", 00:23:11.375 "uuid": "16f913dd-8c01-4186-ab39-7bb887181980", 00:23:11.375 "strip_size_kb": 64, 00:23:11.375 "state": "online", 00:23:11.375 "raid_level": "raid5f", 00:23:11.375 "superblock": false, 00:23:11.375 "num_base_bdevs": 4, 00:23:11.375 "num_base_bdevs_discovered": 4, 00:23:11.375 "num_base_bdevs_operational": 4, 00:23:11.375 "process": { 00:23:11.375 "type": "rebuild", 00:23:11.375 "target": "spare", 00:23:11.375 "progress": { 00:23:11.375 "blocks": 65280, 00:23:11.375 "percent": 33 00:23:11.375 } 00:23:11.375 }, 00:23:11.375 "base_bdevs_list": [ 00:23:11.375 { 00:23:11.375 "name": "spare", 00:23:11.375 "uuid": "8efcecd6-a30c-57be-80a7-3a1eda5fc256", 00:23:11.375 "is_configured": true, 00:23:11.375 "data_offset": 0, 00:23:11.375 "data_size": 65536 00:23:11.375 }, 00:23:11.375 { 00:23:11.375 "name": "BaseBdev2", 00:23:11.375 "uuid": "0f3410b9-c750-5b34-a626-534fe1c584c5", 00:23:11.375 "is_configured": true, 00:23:11.375 "data_offset": 0, 00:23:11.375 "data_size": 65536 00:23:11.375 }, 00:23:11.375 { 00:23:11.375 "name": "BaseBdev3", 00:23:11.375 "uuid": "acfacd8a-e998-572b-9959-d818cb389024", 00:23:11.375 "is_configured": true, 00:23:11.375 "data_offset": 0, 00:23:11.375 
"data_size": 65536 00:23:11.375 }, 00:23:11.375 { 00:23:11.375 "name": "BaseBdev4", 00:23:11.375 "uuid": "21888c3c-fb5e-596b-85dc-d8cd76265334", 00:23:11.375 "is_configured": true, 00:23:11.375 "data_offset": 0, 00:23:11.375 "data_size": 65536 00:23:11.375 } 00:23:11.375 ] 00:23:11.375 }' 00:23:11.375 13:17:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:11.375 13:17:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:11.375 13:17:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:11.634 13:17:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:11.634 13:17:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:12.569 13:17:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:12.569 13:17:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:12.569 13:17:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:12.569 13:17:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:12.569 13:17:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:12.569 13:17:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:12.569 13:17:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:12.570 13:17:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:12.570 13:17:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.570 13:17:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.570 13:17:59 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.570 13:17:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:12.570 "name": "raid_bdev1", 00:23:12.570 "uuid": "16f913dd-8c01-4186-ab39-7bb887181980", 00:23:12.570 "strip_size_kb": 64, 00:23:12.570 "state": "online", 00:23:12.570 "raid_level": "raid5f", 00:23:12.570 "superblock": false, 00:23:12.570 "num_base_bdevs": 4, 00:23:12.570 "num_base_bdevs_discovered": 4, 00:23:12.570 "num_base_bdevs_operational": 4, 00:23:12.570 "process": { 00:23:12.570 "type": "rebuild", 00:23:12.570 "target": "spare", 00:23:12.570 "progress": { 00:23:12.570 "blocks": 88320, 00:23:12.570 "percent": 44 00:23:12.570 } 00:23:12.570 }, 00:23:12.570 "base_bdevs_list": [ 00:23:12.570 { 00:23:12.570 "name": "spare", 00:23:12.570 "uuid": "8efcecd6-a30c-57be-80a7-3a1eda5fc256", 00:23:12.570 "is_configured": true, 00:23:12.570 "data_offset": 0, 00:23:12.570 "data_size": 65536 00:23:12.570 }, 00:23:12.570 { 00:23:12.570 "name": "BaseBdev2", 00:23:12.570 "uuid": "0f3410b9-c750-5b34-a626-534fe1c584c5", 00:23:12.570 "is_configured": true, 00:23:12.570 "data_offset": 0, 00:23:12.570 "data_size": 65536 00:23:12.570 }, 00:23:12.570 { 00:23:12.570 "name": "BaseBdev3", 00:23:12.570 "uuid": "acfacd8a-e998-572b-9959-d818cb389024", 00:23:12.570 "is_configured": true, 00:23:12.570 "data_offset": 0, 00:23:12.570 "data_size": 65536 00:23:12.570 }, 00:23:12.570 { 00:23:12.570 "name": "BaseBdev4", 00:23:12.570 "uuid": "21888c3c-fb5e-596b-85dc-d8cd76265334", 00:23:12.570 "is_configured": true, 00:23:12.570 "data_offset": 0, 00:23:12.570 "data_size": 65536 00:23:12.570 } 00:23:12.570 ] 00:23:12.570 }' 00:23:12.570 13:17:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:12.570 13:17:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:12.570 13:17:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:23:12.570 13:17:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:12.570 13:17:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:13.569 13:18:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:13.569 13:18:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:13.569 13:18:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:13.569 13:18:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:13.569 13:18:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:13.569 13:18:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:13.570 13:18:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:13.570 13:18:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.570 13:18:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.570 13:18:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:13.828 13:18:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.828 13:18:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:13.828 "name": "raid_bdev1", 00:23:13.828 "uuid": "16f913dd-8c01-4186-ab39-7bb887181980", 00:23:13.828 "strip_size_kb": 64, 00:23:13.828 "state": "online", 00:23:13.828 "raid_level": "raid5f", 00:23:13.828 "superblock": false, 00:23:13.828 "num_base_bdevs": 4, 00:23:13.828 "num_base_bdevs_discovered": 4, 00:23:13.828 "num_base_bdevs_operational": 4, 00:23:13.828 "process": { 00:23:13.828 "type": "rebuild", 00:23:13.828 "target": "spare", 00:23:13.828 
"progress": { 00:23:13.828 "blocks": 109440, 00:23:13.828 "percent": 55 00:23:13.828 } 00:23:13.828 }, 00:23:13.828 "base_bdevs_list": [ 00:23:13.828 { 00:23:13.828 "name": "spare", 00:23:13.828 "uuid": "8efcecd6-a30c-57be-80a7-3a1eda5fc256", 00:23:13.828 "is_configured": true, 00:23:13.828 "data_offset": 0, 00:23:13.828 "data_size": 65536 00:23:13.828 }, 00:23:13.828 { 00:23:13.828 "name": "BaseBdev2", 00:23:13.828 "uuid": "0f3410b9-c750-5b34-a626-534fe1c584c5", 00:23:13.828 "is_configured": true, 00:23:13.828 "data_offset": 0, 00:23:13.828 "data_size": 65536 00:23:13.828 }, 00:23:13.828 { 00:23:13.828 "name": "BaseBdev3", 00:23:13.828 "uuid": "acfacd8a-e998-572b-9959-d818cb389024", 00:23:13.828 "is_configured": true, 00:23:13.828 "data_offset": 0, 00:23:13.828 "data_size": 65536 00:23:13.828 }, 00:23:13.828 { 00:23:13.828 "name": "BaseBdev4", 00:23:13.828 "uuid": "21888c3c-fb5e-596b-85dc-d8cd76265334", 00:23:13.828 "is_configured": true, 00:23:13.828 "data_offset": 0, 00:23:13.828 "data_size": 65536 00:23:13.828 } 00:23:13.828 ] 00:23:13.828 }' 00:23:13.828 13:18:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:13.828 13:18:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:13.828 13:18:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:13.828 13:18:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:13.828 13:18:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:14.763 13:18:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:14.763 13:18:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:14.763 13:18:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:14.763 13:18:01 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:14.763 13:18:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:14.763 13:18:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:14.763 13:18:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:14.763 13:18:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.763 13:18:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:14.763 13:18:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:14.763 13:18:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.021 13:18:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:15.021 "name": "raid_bdev1", 00:23:15.021 "uuid": "16f913dd-8c01-4186-ab39-7bb887181980", 00:23:15.021 "strip_size_kb": 64, 00:23:15.021 "state": "online", 00:23:15.021 "raid_level": "raid5f", 00:23:15.021 "superblock": false, 00:23:15.021 "num_base_bdevs": 4, 00:23:15.021 "num_base_bdevs_discovered": 4, 00:23:15.021 "num_base_bdevs_operational": 4, 00:23:15.021 "process": { 00:23:15.021 "type": "rebuild", 00:23:15.021 "target": "spare", 00:23:15.021 "progress": { 00:23:15.021 "blocks": 130560, 00:23:15.021 "percent": 66 00:23:15.021 } 00:23:15.021 }, 00:23:15.021 "base_bdevs_list": [ 00:23:15.021 { 00:23:15.021 "name": "spare", 00:23:15.021 "uuid": "8efcecd6-a30c-57be-80a7-3a1eda5fc256", 00:23:15.021 "is_configured": true, 00:23:15.021 "data_offset": 0, 00:23:15.021 "data_size": 65536 00:23:15.021 }, 00:23:15.021 { 00:23:15.021 "name": "BaseBdev2", 00:23:15.021 "uuid": "0f3410b9-c750-5b34-a626-534fe1c584c5", 00:23:15.021 "is_configured": true, 00:23:15.021 "data_offset": 0, 00:23:15.021 "data_size": 65536 00:23:15.021 }, 00:23:15.021 { 
00:23:15.021 "name": "BaseBdev3", 00:23:15.021 "uuid": "acfacd8a-e998-572b-9959-d818cb389024", 00:23:15.021 "is_configured": true, 00:23:15.021 "data_offset": 0, 00:23:15.021 "data_size": 65536 00:23:15.021 }, 00:23:15.021 { 00:23:15.021 "name": "BaseBdev4", 00:23:15.021 "uuid": "21888c3c-fb5e-596b-85dc-d8cd76265334", 00:23:15.021 "is_configured": true, 00:23:15.021 "data_offset": 0, 00:23:15.021 "data_size": 65536 00:23:15.021 } 00:23:15.021 ] 00:23:15.021 }' 00:23:15.021 13:18:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:15.021 13:18:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:15.021 13:18:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:15.021 13:18:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:15.021 13:18:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:15.956 13:18:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:15.956 13:18:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:15.956 13:18:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:15.956 13:18:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:15.956 13:18:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:15.956 13:18:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:15.956 13:18:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:15.956 13:18:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:15.956 13:18:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:23:15.956 13:18:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.956 13:18:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.956 13:18:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:15.956 "name": "raid_bdev1", 00:23:15.956 "uuid": "16f913dd-8c01-4186-ab39-7bb887181980", 00:23:15.956 "strip_size_kb": 64, 00:23:15.956 "state": "online", 00:23:15.956 "raid_level": "raid5f", 00:23:15.956 "superblock": false, 00:23:15.956 "num_base_bdevs": 4, 00:23:15.956 "num_base_bdevs_discovered": 4, 00:23:15.956 "num_base_bdevs_operational": 4, 00:23:15.956 "process": { 00:23:15.956 "type": "rebuild", 00:23:15.956 "target": "spare", 00:23:15.956 "progress": { 00:23:15.956 "blocks": 153600, 00:23:15.956 "percent": 78 00:23:15.956 } 00:23:15.956 }, 00:23:15.956 "base_bdevs_list": [ 00:23:15.956 { 00:23:15.956 "name": "spare", 00:23:15.956 "uuid": "8efcecd6-a30c-57be-80a7-3a1eda5fc256", 00:23:15.956 "is_configured": true, 00:23:15.956 "data_offset": 0, 00:23:15.956 "data_size": 65536 00:23:15.956 }, 00:23:15.956 { 00:23:15.956 "name": "BaseBdev2", 00:23:15.956 "uuid": "0f3410b9-c750-5b34-a626-534fe1c584c5", 00:23:15.956 "is_configured": true, 00:23:15.956 "data_offset": 0, 00:23:15.956 "data_size": 65536 00:23:15.956 }, 00:23:15.956 { 00:23:15.956 "name": "BaseBdev3", 00:23:15.956 "uuid": "acfacd8a-e998-572b-9959-d818cb389024", 00:23:15.956 "is_configured": true, 00:23:15.956 "data_offset": 0, 00:23:15.956 "data_size": 65536 00:23:15.956 }, 00:23:15.956 { 00:23:15.956 "name": "BaseBdev4", 00:23:15.956 "uuid": "21888c3c-fb5e-596b-85dc-d8cd76265334", 00:23:15.956 "is_configured": true, 00:23:15.956 "data_offset": 0, 00:23:15.957 "data_size": 65536 00:23:15.957 } 00:23:15.957 ] 00:23:15.957 }' 00:23:15.957 13:18:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:16.215 13:18:02 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:16.215 13:18:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:16.215 13:18:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:16.215 13:18:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:17.152 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:17.152 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:17.152 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:17.152 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:17.152 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:17.152 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:17.152 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:17.152 13:18:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.152 13:18:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.152 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:17.152 13:18:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.152 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:17.152 "name": "raid_bdev1", 00:23:17.152 "uuid": "16f913dd-8c01-4186-ab39-7bb887181980", 00:23:17.152 "strip_size_kb": 64, 00:23:17.152 "state": "online", 00:23:17.152 "raid_level": "raid5f", 00:23:17.152 "superblock": false, 00:23:17.152 "num_base_bdevs": 4, 00:23:17.152 
"num_base_bdevs_discovered": 4, 00:23:17.152 "num_base_bdevs_operational": 4, 00:23:17.152 "process": { 00:23:17.152 "type": "rebuild", 00:23:17.152 "target": "spare", 00:23:17.152 "progress": { 00:23:17.153 "blocks": 176640, 00:23:17.153 "percent": 89 00:23:17.153 } 00:23:17.153 }, 00:23:17.153 "base_bdevs_list": [ 00:23:17.153 { 00:23:17.153 "name": "spare", 00:23:17.153 "uuid": "8efcecd6-a30c-57be-80a7-3a1eda5fc256", 00:23:17.153 "is_configured": true, 00:23:17.153 "data_offset": 0, 00:23:17.153 "data_size": 65536 00:23:17.153 }, 00:23:17.153 { 00:23:17.153 "name": "BaseBdev2", 00:23:17.153 "uuid": "0f3410b9-c750-5b34-a626-534fe1c584c5", 00:23:17.153 "is_configured": true, 00:23:17.153 "data_offset": 0, 00:23:17.153 "data_size": 65536 00:23:17.153 }, 00:23:17.153 { 00:23:17.153 "name": "BaseBdev3", 00:23:17.153 "uuid": "acfacd8a-e998-572b-9959-d818cb389024", 00:23:17.153 "is_configured": true, 00:23:17.153 "data_offset": 0, 00:23:17.153 "data_size": 65536 00:23:17.153 }, 00:23:17.153 { 00:23:17.153 "name": "BaseBdev4", 00:23:17.153 "uuid": "21888c3c-fb5e-596b-85dc-d8cd76265334", 00:23:17.153 "is_configured": true, 00:23:17.153 "data_offset": 0, 00:23:17.153 "data_size": 65536 00:23:17.153 } 00:23:17.153 ] 00:23:17.153 }' 00:23:17.153 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:17.411 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:17.411 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:17.411 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:17.411 13:18:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:18.430 [2024-12-06 13:18:05.179733] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:18.430 [2024-12-06 13:18:05.179846] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:18.430 [2024-12-06 13:18:05.179949] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:18.430 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:18.430 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:18.430 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:18.430 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:18.430 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:18.430 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:18.430 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:18.430 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.430 13:18:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.430 13:18:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.430 13:18:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.430 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:18.430 "name": "raid_bdev1", 00:23:18.430 "uuid": "16f913dd-8c01-4186-ab39-7bb887181980", 00:23:18.430 "strip_size_kb": 64, 00:23:18.430 "state": "online", 00:23:18.430 "raid_level": "raid5f", 00:23:18.430 "superblock": false, 00:23:18.430 "num_base_bdevs": 4, 00:23:18.430 "num_base_bdevs_discovered": 4, 00:23:18.430 "num_base_bdevs_operational": 4, 00:23:18.430 "base_bdevs_list": [ 00:23:18.430 { 00:23:18.430 "name": "spare", 00:23:18.430 "uuid": 
"8efcecd6-a30c-57be-80a7-3a1eda5fc256", 00:23:18.430 "is_configured": true, 00:23:18.430 "data_offset": 0, 00:23:18.430 "data_size": 65536 00:23:18.430 }, 00:23:18.430 { 00:23:18.430 "name": "BaseBdev2", 00:23:18.430 "uuid": "0f3410b9-c750-5b34-a626-534fe1c584c5", 00:23:18.430 "is_configured": true, 00:23:18.430 "data_offset": 0, 00:23:18.430 "data_size": 65536 00:23:18.430 }, 00:23:18.430 { 00:23:18.430 "name": "BaseBdev3", 00:23:18.430 "uuid": "acfacd8a-e998-572b-9959-d818cb389024", 00:23:18.430 "is_configured": true, 00:23:18.430 "data_offset": 0, 00:23:18.430 "data_size": 65536 00:23:18.430 }, 00:23:18.430 { 00:23:18.430 "name": "BaseBdev4", 00:23:18.430 "uuid": "21888c3c-fb5e-596b-85dc-d8cd76265334", 00:23:18.430 "is_configured": true, 00:23:18.430 "data_offset": 0, 00:23:18.430 "data_size": 65536 00:23:18.430 } 00:23:18.430 ] 00:23:18.430 }' 00:23:18.430 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:18.430 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:18.430 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:18.430 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:23:18.430 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:23:18.430 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:18.430 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:18.430 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:18.430 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:18.430 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:18.430 13:18:05 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.430 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:18.430 13:18:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.430 13:18:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.430 13:18:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.430 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:18.430 "name": "raid_bdev1", 00:23:18.430 "uuid": "16f913dd-8c01-4186-ab39-7bb887181980", 00:23:18.430 "strip_size_kb": 64, 00:23:18.430 "state": "online", 00:23:18.430 "raid_level": "raid5f", 00:23:18.430 "superblock": false, 00:23:18.430 "num_base_bdevs": 4, 00:23:18.430 "num_base_bdevs_discovered": 4, 00:23:18.430 "num_base_bdevs_operational": 4, 00:23:18.430 "base_bdevs_list": [ 00:23:18.430 { 00:23:18.430 "name": "spare", 00:23:18.430 "uuid": "8efcecd6-a30c-57be-80a7-3a1eda5fc256", 00:23:18.430 "is_configured": true, 00:23:18.430 "data_offset": 0, 00:23:18.430 "data_size": 65536 00:23:18.430 }, 00:23:18.430 { 00:23:18.430 "name": "BaseBdev2", 00:23:18.430 "uuid": "0f3410b9-c750-5b34-a626-534fe1c584c5", 00:23:18.430 "is_configured": true, 00:23:18.430 "data_offset": 0, 00:23:18.430 "data_size": 65536 00:23:18.430 }, 00:23:18.430 { 00:23:18.430 "name": "BaseBdev3", 00:23:18.430 "uuid": "acfacd8a-e998-572b-9959-d818cb389024", 00:23:18.430 "is_configured": true, 00:23:18.430 "data_offset": 0, 00:23:18.430 "data_size": 65536 00:23:18.430 }, 00:23:18.430 { 00:23:18.430 "name": "BaseBdev4", 00:23:18.430 "uuid": "21888c3c-fb5e-596b-85dc-d8cd76265334", 00:23:18.430 "is_configured": true, 00:23:18.430 "data_offset": 0, 00:23:18.430 "data_size": 65536 00:23:18.430 } 00:23:18.430 ] 00:23:18.430 }' 00:23:18.430 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:23:18.689 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:18.689 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:18.689 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:18.689 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:18.689 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:18.689 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:18.689 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:18.689 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:18.689 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:18.689 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:18.689 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:18.689 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:18.689 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:18.689 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:18.689 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.689 13:18:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.689 13:18:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.689 13:18:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:18.689 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:18.689 "name": "raid_bdev1", 00:23:18.689 "uuid": "16f913dd-8c01-4186-ab39-7bb887181980", 00:23:18.689 "strip_size_kb": 64, 00:23:18.689 "state": "online", 00:23:18.689 "raid_level": "raid5f", 00:23:18.689 "superblock": false, 00:23:18.689 "num_base_bdevs": 4, 00:23:18.689 "num_base_bdevs_discovered": 4, 00:23:18.689 "num_base_bdevs_operational": 4, 00:23:18.689 "base_bdevs_list": [ 00:23:18.689 { 00:23:18.689 "name": "spare", 00:23:18.689 "uuid": "8efcecd6-a30c-57be-80a7-3a1eda5fc256", 00:23:18.689 "is_configured": true, 00:23:18.689 "data_offset": 0, 00:23:18.689 "data_size": 65536 00:23:18.689 }, 00:23:18.689 { 00:23:18.689 "name": "BaseBdev2", 00:23:18.689 "uuid": "0f3410b9-c750-5b34-a626-534fe1c584c5", 00:23:18.689 "is_configured": true, 00:23:18.689 "data_offset": 0, 00:23:18.689 "data_size": 65536 00:23:18.689 }, 00:23:18.689 { 00:23:18.689 "name": "BaseBdev3", 00:23:18.689 "uuid": "acfacd8a-e998-572b-9959-d818cb389024", 00:23:18.689 "is_configured": true, 00:23:18.689 "data_offset": 0, 00:23:18.689 "data_size": 65536 00:23:18.689 }, 00:23:18.689 { 00:23:18.689 "name": "BaseBdev4", 00:23:18.689 "uuid": "21888c3c-fb5e-596b-85dc-d8cd76265334", 00:23:18.689 "is_configured": true, 00:23:18.689 "data_offset": 0, 00:23:18.689 "data_size": 65536 00:23:18.689 } 00:23:18.689 ] 00:23:18.689 }' 00:23:18.689 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:18.689 13:18:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.256 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:19.256 13:18:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.256 13:18:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.256 [2024-12-06 13:18:05.989317] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:19.256 [2024-12-06 13:18:05.989376] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:19.256 [2024-12-06 13:18:05.989557] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:19.256 [2024-12-06 13:18:05.989707] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:19.256 [2024-12-06 13:18:05.989727] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:19.256 13:18:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.256 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:23:19.256 13:18:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:19.256 13:18:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.256 13:18:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.256 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.256 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:23:19.256 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:23:19.256 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:23:19.256 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:19.256 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:19.256 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:19.256 13:18:06 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:23:19.256 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:19.256 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:19.256 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:23:19.256 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:19.256 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:19.256 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:19.515 /dev/nbd0 00:23:19.515 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:19.515 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:19.515 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:19.515 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:23:19.515 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:19.515 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:19.515 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:19.515 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:23:19.515 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:19.515 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:19.515 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:19.515 1+0 records in 
00:23:19.515 1+0 records out 00:23:19.515 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000588537 s, 7.0 MB/s 00:23:19.515 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:19.515 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:23:19.515 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:19.515 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:19.515 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:23:19.515 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:19.515 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:19.515 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:23:19.774 /dev/nbd1 00:23:19.774 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:19.774 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:19.774 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:23:19.774 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:23:19.774 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:19.774 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:19.774 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:23:19.774 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:23:19.774 13:18:06 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:19.774 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:19.774 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:19.774 1+0 records in 00:23:19.774 1+0 records out 00:23:19.774 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037796 s, 10.8 MB/s 00:23:20.032 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:20.032 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:23:20.032 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:20.032 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:20.032 13:18:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:23:20.032 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:20.032 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:20.032 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:20.032 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:23:20.032 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:20.032 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:20.032 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:20.032 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:23:20.032 13:18:06 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:20.032 13:18:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:20.292 13:18:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:20.292 13:18:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:20.292 13:18:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:20.292 13:18:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:20.292 13:18:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:20.292 13:18:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:20.292 13:18:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:20.292 13:18:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:20.292 13:18:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:20.292 13:18:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:23:20.551 13:18:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:20.551 13:18:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:20.551 13:18:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:20.551 13:18:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:20.551 13:18:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:20.551 13:18:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:20.551 13:18:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 
00:23:20.551 13:18:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:20.551 13:18:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:23:20.551 13:18:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85375 00:23:20.551 13:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 85375 ']' 00:23:20.551 13:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 85375 00:23:20.551 13:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:23:20.551 13:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:20.812 13:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85375 00:23:20.812 killing process with pid 85375 00:23:20.812 Received shutdown signal, test time was about 60.000000 seconds 00:23:20.812 00:23:20.812 Latency(us) 00:23:20.812 [2024-12-06T13:18:07.828Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.812 [2024-12-06T13:18:07.828Z] =================================================================================================================== 00:23:20.812 [2024-12-06T13:18:07.828Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:20.812 13:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:20.812 13:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:20.812 13:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85375' 00:23:20.812 13:18:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 85375 00:23:20.812 [2024-12-06 13:18:07.593424] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:20.812 13:18:07 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@978 -- # wait 85375 00:23:21.071 [2024-12-06 13:18:08.036479] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:22.445 13:18:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:23:22.445 ************************************ 00:23:22.445 END TEST raid5f_rebuild_test 00:23:22.445 ************************************ 00:23:22.445 00:23:22.445 real 0m20.117s 00:23:22.445 user 0m24.822s 00:23:22.445 sys 0m2.306s 00:23:22.445 13:18:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:22.445 13:18:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.446 13:18:09 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:23:22.446 13:18:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:23:22.446 13:18:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:22.446 13:18:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:22.446 ************************************ 00:23:22.446 START TEST raid5f_rebuild_test_sb 00:23:22.446 ************************************ 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:23:22.446 
13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 
00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85880 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85880 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85880 ']' 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:22.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:22.446 13:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.446 [2024-12-06 13:18:09.332647] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:23:22.446 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:22.446 Zero copy mechanism will not be used. 00:23:22.446 [2024-12-06 13:18:09.333604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85880 ] 00:23:22.704 [2024-12-06 13:18:09.504689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.704 [2024-12-06 13:18:09.645474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.972 [2024-12-06 13:18:09.860039] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:22.972 [2024-12-06 13:18:09.860089] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:23.554 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:23.554 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:23:23.554 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:23.554 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:23.554 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.554 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.554 BaseBdev1_malloc 00:23:23.554 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:23:23.554 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:23.554 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.554 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.554 [2024-12-06 13:18:10.400222] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:23.554 [2024-12-06 13:18:10.400305] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:23.554 [2024-12-06 13:18:10.400340] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:23.554 [2024-12-06 13:18:10.400360] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:23.554 [2024-12-06 13:18:10.403405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:23.554 [2024-12-06 13:18:10.403456] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:23.554 BaseBdev1 00:23:23.554 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.554 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:23.554 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:23.554 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.554 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.554 BaseBdev2_malloc 00:23:23.554 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.554 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:23.554 
13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.554 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.554 [2024-12-06 13:18:10.454662] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:23.554 [2024-12-06 13:18:10.454752] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:23.554 [2024-12-06 13:18:10.454784] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:23.554 [2024-12-06 13:18:10.454804] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:23.554 [2024-12-06 13:18:10.458079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:23.554 [2024-12-06 13:18:10.458320] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:23.554 BaseBdev2 00:23:23.554 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.554 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:23.554 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:23.554 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.554 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.554 BaseBdev3_malloc 00:23:23.554 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.554 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:23.554 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.554 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:23:23.554 [2024-12-06 13:18:10.527680] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:23.554 [2024-12-06 13:18:10.527774] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:23.554 [2024-12-06 13:18:10.527810] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:23.554 [2024-12-06 13:18:10.527829] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:23.554 [2024-12-06 13:18:10.530888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:23.554 [2024-12-06 13:18:10.531101] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:23.554 BaseBdev3 00:23:23.554 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.554 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:23.554 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:23.554 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.554 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.813 BaseBdev4_malloc 00:23:23.813 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.813 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:23.813 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.813 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.813 [2024-12-06 13:18:10.588053] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:23.813 
[2024-12-06 13:18:10.588137] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:23.813 [2024-12-06 13:18:10.588172] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:23:23.813 [2024-12-06 13:18:10.588191] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:23.813 [2024-12-06 13:18:10.591545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:23.813 [2024-12-06 13:18:10.591616] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:23.813 BaseBdev4 00:23:23.813 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.813 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:23:23.813 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.813 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.813 spare_malloc 00:23:23.813 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.813 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:23.813 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.813 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.813 spare_delay 00:23:23.813 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.813 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:23.813 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.813 13:18:10 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.813 [2024-12-06 13:18:10.658341] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:23.813 [2024-12-06 13:18:10.658417] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:23.813 [2024-12-06 13:18:10.658458] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:23:23.813 [2024-12-06 13:18:10.658533] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:23.813 [2024-12-06 13:18:10.661917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:23.813 [2024-12-06 13:18:10.661982] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:23.813 spare 00:23:23.813 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.813 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:23:23.813 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.813 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.813 [2024-12-06 13:18:10.670380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:23.813 [2024-12-06 13:18:10.673430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:23.813 [2024-12-06 13:18:10.673550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:23.813 [2024-12-06 13:18:10.673633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:23.813 [2024-12-06 13:18:10.673922] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:23.813 [2024-12-06 
13:18:10.673945] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:23.813 [2024-12-06 13:18:10.674277] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:23.813 [2024-12-06 13:18:10.681057] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:23.813 [2024-12-06 13:18:10.681087] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:23.813 [2024-12-06 13:18:10.681308] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:23.813 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.813 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:23.813 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:23.813 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:23.813 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:23.813 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:23.813 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:23.813 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:23.813 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:23.813 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:23.813 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:23.813 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:23.813 13:18:10 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.813 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.813 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.813 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.814 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:23.814 "name": "raid_bdev1", 00:23:23.814 "uuid": "b21812c1-04f9-4e28-8644-bbf4f70af91f", 00:23:23.814 "strip_size_kb": 64, 00:23:23.814 "state": "online", 00:23:23.814 "raid_level": "raid5f", 00:23:23.814 "superblock": true, 00:23:23.814 "num_base_bdevs": 4, 00:23:23.814 "num_base_bdevs_discovered": 4, 00:23:23.814 "num_base_bdevs_operational": 4, 00:23:23.814 "base_bdevs_list": [ 00:23:23.814 { 00:23:23.814 "name": "BaseBdev1", 00:23:23.814 "uuid": "98a5bce3-da84-5fe5-84d8-fad984548ce4", 00:23:23.814 "is_configured": true, 00:23:23.814 "data_offset": 2048, 00:23:23.814 "data_size": 63488 00:23:23.814 }, 00:23:23.814 { 00:23:23.814 "name": "BaseBdev2", 00:23:23.814 "uuid": "43d97263-ca28-5045-9530-d2a7b781b867", 00:23:23.814 "is_configured": true, 00:23:23.814 "data_offset": 2048, 00:23:23.814 "data_size": 63488 00:23:23.814 }, 00:23:23.814 { 00:23:23.814 "name": "BaseBdev3", 00:23:23.814 "uuid": "924dc1d8-c9bf-5923-924c-3ccb2277009d", 00:23:23.814 "is_configured": true, 00:23:23.814 "data_offset": 2048, 00:23:23.814 "data_size": 63488 00:23:23.814 }, 00:23:23.814 { 00:23:23.814 "name": "BaseBdev4", 00:23:23.814 "uuid": "370a577b-21a0-56ee-9f28-28172ab48bbd", 00:23:23.814 "is_configured": true, 00:23:23.814 "data_offset": 2048, 00:23:23.814 "data_size": 63488 00:23:23.814 } 00:23:23.814 ] 00:23:23.814 }' 00:23:23.814 13:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:23.814 13:18:10 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.382 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:24.382 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:23:24.382 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.382 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.382 [2024-12-06 13:18:11.221297] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:24.382 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.382 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:23:24.382 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:24.382 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:24.382 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.382 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:24.382 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.382 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:23:24.382 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:23:24.382 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:23:24.382 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:23:24.382 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:23:24.382 13:18:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:24.382 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:24.382 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:24.382 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:24.382 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:24.382 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:23:24.382 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:24.382 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:24.382 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:24.641 [2024-12-06 13:18:11.593207] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:24.641 /dev/nbd0 00:23:24.641 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:24.641 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:24.641 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:24.641 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:23:24.641 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:24.641 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:24.642 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:24.642 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 
00:23:24.642 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:24.642 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:24.642 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:24.642 1+0 records in 00:23:24.642 1+0 records out 00:23:24.642 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372987 s, 11.0 MB/s 00:23:24.901 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:24.901 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:23:24.901 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:24.901 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:24.901 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:23:24.901 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:24.901 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:24.901 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:23:24.901 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:23:24.901 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:23:24.901 13:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:23:25.469 496+0 records in 00:23:25.469 496+0 records out 00:23:25.469 97517568 bytes (98 MB, 93 MiB) copied, 0.612744 s, 159 MB/s 00:23:25.469 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:23:25.469 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:25.469 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:25.469 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:25.469 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:23:25.469 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:25.469 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:25.728 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:25.728 [2024-12-06 13:18:12.562557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:25.728 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:25.728 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:25.728 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:25.728 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:25.728 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:25.728 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:25.728 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:25.728 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:23:25.728 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.728 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:23:25.728 [2024-12-06 13:18:12.575123] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:25.728 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.728 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:25.729 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:25.729 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:25.729 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:25.729 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:25.729 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:25.729 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:25.729 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:25.729 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:25.729 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:25.729 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:25.729 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:25.729 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.729 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.729 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.729 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:25.729 "name": "raid_bdev1", 00:23:25.729 "uuid": "b21812c1-04f9-4e28-8644-bbf4f70af91f", 00:23:25.729 "strip_size_kb": 64, 00:23:25.729 "state": "online", 00:23:25.729 "raid_level": "raid5f", 00:23:25.729 "superblock": true, 00:23:25.729 "num_base_bdevs": 4, 00:23:25.729 "num_base_bdevs_discovered": 3, 00:23:25.729 "num_base_bdevs_operational": 3, 00:23:25.729 "base_bdevs_list": [ 00:23:25.729 { 00:23:25.729 "name": null, 00:23:25.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.729 "is_configured": false, 00:23:25.729 "data_offset": 0, 00:23:25.729 "data_size": 63488 00:23:25.729 }, 00:23:25.729 { 00:23:25.729 "name": "BaseBdev2", 00:23:25.729 "uuid": "43d97263-ca28-5045-9530-d2a7b781b867", 00:23:25.729 "is_configured": true, 00:23:25.729 "data_offset": 2048, 00:23:25.729 "data_size": 63488 00:23:25.729 }, 00:23:25.729 { 00:23:25.729 "name": "BaseBdev3", 00:23:25.729 "uuid": "924dc1d8-c9bf-5923-924c-3ccb2277009d", 00:23:25.729 "is_configured": true, 00:23:25.729 "data_offset": 2048, 00:23:25.729 "data_size": 63488 00:23:25.729 }, 00:23:25.729 { 00:23:25.729 "name": "BaseBdev4", 00:23:25.729 "uuid": "370a577b-21a0-56ee-9f28-28172ab48bbd", 00:23:25.729 "is_configured": true, 00:23:25.729 "data_offset": 2048, 00:23:25.729 "data_size": 63488 00:23:25.729 } 00:23:25.729 ] 00:23:25.729 }' 00:23:25.729 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:25.729 13:18:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:26.297 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:26.297 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.297 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:26.297 [2024-12-06 13:18:13.091372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:23:26.297 [2024-12-06 13:18:13.105698] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:23:26.297 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.297 13:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:23:26.297 [2024-12-06 13:18:13.114692] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:27.235 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:27.235 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:27.235 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:27.235 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:27.235 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:27.235 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:27.235 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:27.235 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.235 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.235 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.235 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:27.235 "name": "raid_bdev1", 00:23:27.235 "uuid": "b21812c1-04f9-4e28-8644-bbf4f70af91f", 00:23:27.235 "strip_size_kb": 64, 00:23:27.235 "state": "online", 00:23:27.235 "raid_level": "raid5f", 00:23:27.235 "superblock": true, 00:23:27.235 "num_base_bdevs": 4, 
00:23:27.235 "num_base_bdevs_discovered": 4, 00:23:27.235 "num_base_bdevs_operational": 4, 00:23:27.235 "process": { 00:23:27.235 "type": "rebuild", 00:23:27.235 "target": "spare", 00:23:27.235 "progress": { 00:23:27.236 "blocks": 17280, 00:23:27.236 "percent": 9 00:23:27.236 } 00:23:27.236 }, 00:23:27.236 "base_bdevs_list": [ 00:23:27.236 { 00:23:27.236 "name": "spare", 00:23:27.236 "uuid": "975ee481-0648-5b2b-8b6e-bd84780f2d72", 00:23:27.236 "is_configured": true, 00:23:27.236 "data_offset": 2048, 00:23:27.236 "data_size": 63488 00:23:27.236 }, 00:23:27.236 { 00:23:27.236 "name": "BaseBdev2", 00:23:27.236 "uuid": "43d97263-ca28-5045-9530-d2a7b781b867", 00:23:27.236 "is_configured": true, 00:23:27.236 "data_offset": 2048, 00:23:27.236 "data_size": 63488 00:23:27.236 }, 00:23:27.236 { 00:23:27.236 "name": "BaseBdev3", 00:23:27.236 "uuid": "924dc1d8-c9bf-5923-924c-3ccb2277009d", 00:23:27.236 "is_configured": true, 00:23:27.236 "data_offset": 2048, 00:23:27.236 "data_size": 63488 00:23:27.236 }, 00:23:27.236 { 00:23:27.236 "name": "BaseBdev4", 00:23:27.236 "uuid": "370a577b-21a0-56ee-9f28-28172ab48bbd", 00:23:27.236 "is_configured": true, 00:23:27.236 "data_offset": 2048, 00:23:27.236 "data_size": 63488 00:23:27.236 } 00:23:27.236 ] 00:23:27.236 }' 00:23:27.236 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:27.236 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:27.236 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:27.495 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:27.495 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:27.495 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.495 13:18:14 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.495 [2024-12-06 13:18:14.280169] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:27.495 [2024-12-06 13:18:14.328126] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:27.495 [2024-12-06 13:18:14.328224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:27.495 [2024-12-06 13:18:14.328248] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:27.495 [2024-12-06 13:18:14.328262] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:27.495 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.495 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:27.495 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:27.495 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:27.495 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:27.495 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:27.495 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:27.495 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:27.495 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:27.495 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:27.495 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:27.495 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:27.495 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.495 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.495 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:27.495 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.495 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:27.495 "name": "raid_bdev1", 00:23:27.495 "uuid": "b21812c1-04f9-4e28-8644-bbf4f70af91f", 00:23:27.495 "strip_size_kb": 64, 00:23:27.495 "state": "online", 00:23:27.495 "raid_level": "raid5f", 00:23:27.495 "superblock": true, 00:23:27.495 "num_base_bdevs": 4, 00:23:27.495 "num_base_bdevs_discovered": 3, 00:23:27.495 "num_base_bdevs_operational": 3, 00:23:27.495 "base_bdevs_list": [ 00:23:27.495 { 00:23:27.495 "name": null, 00:23:27.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:27.495 "is_configured": false, 00:23:27.495 "data_offset": 0, 00:23:27.495 "data_size": 63488 00:23:27.495 }, 00:23:27.495 { 00:23:27.495 "name": "BaseBdev2", 00:23:27.495 "uuid": "43d97263-ca28-5045-9530-d2a7b781b867", 00:23:27.495 "is_configured": true, 00:23:27.495 "data_offset": 2048, 00:23:27.495 "data_size": 63488 00:23:27.495 }, 00:23:27.495 { 00:23:27.495 "name": "BaseBdev3", 00:23:27.495 "uuid": "924dc1d8-c9bf-5923-924c-3ccb2277009d", 00:23:27.495 "is_configured": true, 00:23:27.495 "data_offset": 2048, 00:23:27.495 "data_size": 63488 00:23:27.495 }, 00:23:27.495 { 00:23:27.495 "name": "BaseBdev4", 00:23:27.495 "uuid": "370a577b-21a0-56ee-9f28-28172ab48bbd", 00:23:27.495 "is_configured": true, 00:23:27.495 "data_offset": 2048, 00:23:27.495 "data_size": 63488 00:23:27.495 } 00:23:27.495 ] 00:23:27.495 }' 00:23:27.495 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:27.495 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.074 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:28.074 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:28.074 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:28.074 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:28.074 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:28.074 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:28.074 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:28.074 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.074 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.074 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.074 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:28.074 "name": "raid_bdev1", 00:23:28.074 "uuid": "b21812c1-04f9-4e28-8644-bbf4f70af91f", 00:23:28.074 "strip_size_kb": 64, 00:23:28.074 "state": "online", 00:23:28.074 "raid_level": "raid5f", 00:23:28.074 "superblock": true, 00:23:28.074 "num_base_bdevs": 4, 00:23:28.074 "num_base_bdevs_discovered": 3, 00:23:28.074 "num_base_bdevs_operational": 3, 00:23:28.074 "base_bdevs_list": [ 00:23:28.074 { 00:23:28.074 "name": null, 00:23:28.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:28.074 "is_configured": false, 00:23:28.074 "data_offset": 0, 00:23:28.074 "data_size": 63488 00:23:28.074 }, 00:23:28.074 { 
00:23:28.074 "name": "BaseBdev2", 00:23:28.074 "uuid": "43d97263-ca28-5045-9530-d2a7b781b867", 00:23:28.074 "is_configured": true, 00:23:28.074 "data_offset": 2048, 00:23:28.074 "data_size": 63488 00:23:28.074 }, 00:23:28.074 { 00:23:28.074 "name": "BaseBdev3", 00:23:28.074 "uuid": "924dc1d8-c9bf-5923-924c-3ccb2277009d", 00:23:28.074 "is_configured": true, 00:23:28.074 "data_offset": 2048, 00:23:28.074 "data_size": 63488 00:23:28.074 }, 00:23:28.074 { 00:23:28.074 "name": "BaseBdev4", 00:23:28.074 "uuid": "370a577b-21a0-56ee-9f28-28172ab48bbd", 00:23:28.074 "is_configured": true, 00:23:28.074 "data_offset": 2048, 00:23:28.074 "data_size": 63488 00:23:28.074 } 00:23:28.074 ] 00:23:28.074 }' 00:23:28.074 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:28.074 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:28.074 13:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:28.074 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:28.074 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:28.074 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.074 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.074 [2024-12-06 13:18:15.035232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:28.074 [2024-12-06 13:18:15.049421] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:23:28.074 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.075 13:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:23:28.075 [2024-12-06 13:18:15.058660] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:29.506 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:29.506 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:29.506 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:29.506 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:29.506 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:29.506 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.506 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:29.506 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.506 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:29.506 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.506 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:29.506 "name": "raid_bdev1", 00:23:29.506 "uuid": "b21812c1-04f9-4e28-8644-bbf4f70af91f", 00:23:29.506 "strip_size_kb": 64, 00:23:29.506 "state": "online", 00:23:29.506 "raid_level": "raid5f", 00:23:29.506 "superblock": true, 00:23:29.506 "num_base_bdevs": 4, 00:23:29.506 "num_base_bdevs_discovered": 4, 00:23:29.506 "num_base_bdevs_operational": 4, 00:23:29.506 "process": { 00:23:29.506 "type": "rebuild", 00:23:29.506 "target": "spare", 00:23:29.506 "progress": { 00:23:29.506 "blocks": 17280, 00:23:29.506 "percent": 9 00:23:29.506 } 00:23:29.506 }, 00:23:29.506 "base_bdevs_list": [ 00:23:29.506 { 00:23:29.506 "name": "spare", 00:23:29.506 "uuid": 
"975ee481-0648-5b2b-8b6e-bd84780f2d72", 00:23:29.506 "is_configured": true, 00:23:29.506 "data_offset": 2048, 00:23:29.506 "data_size": 63488 00:23:29.506 }, 00:23:29.506 { 00:23:29.506 "name": "BaseBdev2", 00:23:29.506 "uuid": "43d97263-ca28-5045-9530-d2a7b781b867", 00:23:29.506 "is_configured": true, 00:23:29.506 "data_offset": 2048, 00:23:29.506 "data_size": 63488 00:23:29.506 }, 00:23:29.506 { 00:23:29.506 "name": "BaseBdev3", 00:23:29.506 "uuid": "924dc1d8-c9bf-5923-924c-3ccb2277009d", 00:23:29.506 "is_configured": true, 00:23:29.506 "data_offset": 2048, 00:23:29.506 "data_size": 63488 00:23:29.506 }, 00:23:29.506 { 00:23:29.506 "name": "BaseBdev4", 00:23:29.506 "uuid": "370a577b-21a0-56ee-9f28-28172ab48bbd", 00:23:29.506 "is_configured": true, 00:23:29.506 "data_offset": 2048, 00:23:29.506 "data_size": 63488 00:23:29.506 } 00:23:29.506 ] 00:23:29.506 }' 00:23:29.506 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:29.506 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:29.506 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:29.506 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:29.506 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:23:29.506 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:23:29.506 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:23:29.506 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:23:29.506 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:23:29.506 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=707 00:23:29.506 
13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:29.506 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:29.506 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:29.506 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:29.506 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:29.506 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:29.506 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.506 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:29.506 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.506 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:29.506 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.506 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:29.506 "name": "raid_bdev1", 00:23:29.506 "uuid": "b21812c1-04f9-4e28-8644-bbf4f70af91f", 00:23:29.506 "strip_size_kb": 64, 00:23:29.506 "state": "online", 00:23:29.506 "raid_level": "raid5f", 00:23:29.506 "superblock": true, 00:23:29.506 "num_base_bdevs": 4, 00:23:29.506 "num_base_bdevs_discovered": 4, 00:23:29.506 "num_base_bdevs_operational": 4, 00:23:29.506 "process": { 00:23:29.506 "type": "rebuild", 00:23:29.506 "target": "spare", 00:23:29.506 "progress": { 00:23:29.506 "blocks": 21120, 00:23:29.506 "percent": 11 00:23:29.506 } 00:23:29.506 }, 00:23:29.506 "base_bdevs_list": [ 00:23:29.506 { 00:23:29.506 "name": "spare", 00:23:29.506 "uuid": 
"975ee481-0648-5b2b-8b6e-bd84780f2d72", 00:23:29.506 "is_configured": true, 00:23:29.506 "data_offset": 2048, 00:23:29.506 "data_size": 63488 00:23:29.506 }, 00:23:29.506 { 00:23:29.506 "name": "BaseBdev2", 00:23:29.506 "uuid": "43d97263-ca28-5045-9530-d2a7b781b867", 00:23:29.506 "is_configured": true, 00:23:29.506 "data_offset": 2048, 00:23:29.506 "data_size": 63488 00:23:29.506 }, 00:23:29.506 { 00:23:29.506 "name": "BaseBdev3", 00:23:29.506 "uuid": "924dc1d8-c9bf-5923-924c-3ccb2277009d", 00:23:29.506 "is_configured": true, 00:23:29.506 "data_offset": 2048, 00:23:29.506 "data_size": 63488 00:23:29.506 }, 00:23:29.506 { 00:23:29.506 "name": "BaseBdev4", 00:23:29.506 "uuid": "370a577b-21a0-56ee-9f28-28172ab48bbd", 00:23:29.506 "is_configured": true, 00:23:29.506 "data_offset": 2048, 00:23:29.506 "data_size": 63488 00:23:29.506 } 00:23:29.506 ] 00:23:29.506 }' 00:23:29.506 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:29.507 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:29.507 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:29.507 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:29.507 13:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:30.444 13:18:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:30.444 13:18:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:30.444 13:18:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:30.444 13:18:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:30.444 13:18:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:23:30.444 13:18:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:30.444 13:18:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:30.444 13:18:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:30.444 13:18:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.444 13:18:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:30.444 13:18:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.444 13:18:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:30.444 "name": "raid_bdev1", 00:23:30.444 "uuid": "b21812c1-04f9-4e28-8644-bbf4f70af91f", 00:23:30.444 "strip_size_kb": 64, 00:23:30.444 "state": "online", 00:23:30.444 "raid_level": "raid5f", 00:23:30.444 "superblock": true, 00:23:30.444 "num_base_bdevs": 4, 00:23:30.444 "num_base_bdevs_discovered": 4, 00:23:30.444 "num_base_bdevs_operational": 4, 00:23:30.444 "process": { 00:23:30.444 "type": "rebuild", 00:23:30.444 "target": "spare", 00:23:30.444 "progress": { 00:23:30.444 "blocks": 44160, 00:23:30.444 "percent": 23 00:23:30.444 } 00:23:30.444 }, 00:23:30.444 "base_bdevs_list": [ 00:23:30.444 { 00:23:30.444 "name": "spare", 00:23:30.444 "uuid": "975ee481-0648-5b2b-8b6e-bd84780f2d72", 00:23:30.444 "is_configured": true, 00:23:30.444 "data_offset": 2048, 00:23:30.444 "data_size": 63488 00:23:30.444 }, 00:23:30.444 { 00:23:30.444 "name": "BaseBdev2", 00:23:30.444 "uuid": "43d97263-ca28-5045-9530-d2a7b781b867", 00:23:30.444 "is_configured": true, 00:23:30.444 "data_offset": 2048, 00:23:30.444 "data_size": 63488 00:23:30.444 }, 00:23:30.444 { 00:23:30.444 "name": "BaseBdev3", 00:23:30.444 "uuid": "924dc1d8-c9bf-5923-924c-3ccb2277009d", 00:23:30.444 "is_configured": true, 00:23:30.444 
"data_offset": 2048, 00:23:30.444 "data_size": 63488 00:23:30.444 }, 00:23:30.444 { 00:23:30.444 "name": "BaseBdev4", 00:23:30.444 "uuid": "370a577b-21a0-56ee-9f28-28172ab48bbd", 00:23:30.444 "is_configured": true, 00:23:30.444 "data_offset": 2048, 00:23:30.444 "data_size": 63488 00:23:30.444 } 00:23:30.444 ] 00:23:30.444 }' 00:23:30.444 13:18:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:30.702 13:18:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:30.702 13:18:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:30.702 13:18:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:30.702 13:18:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:31.639 13:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:31.639 13:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:31.639 13:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:31.639 13:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:31.639 13:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:31.639 13:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:31.639 13:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.639 13:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.639 13:18:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.639 13:18:18 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:23:31.639 13:18:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.639 13:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:31.639 "name": "raid_bdev1", 00:23:31.639 "uuid": "b21812c1-04f9-4e28-8644-bbf4f70af91f", 00:23:31.639 "strip_size_kb": 64, 00:23:31.639 "state": "online", 00:23:31.639 "raid_level": "raid5f", 00:23:31.639 "superblock": true, 00:23:31.639 "num_base_bdevs": 4, 00:23:31.639 "num_base_bdevs_discovered": 4, 00:23:31.639 "num_base_bdevs_operational": 4, 00:23:31.639 "process": { 00:23:31.639 "type": "rebuild", 00:23:31.639 "target": "spare", 00:23:31.639 "progress": { 00:23:31.639 "blocks": 65280, 00:23:31.639 "percent": 34 00:23:31.639 } 00:23:31.639 }, 00:23:31.639 "base_bdevs_list": [ 00:23:31.639 { 00:23:31.639 "name": "spare", 00:23:31.639 "uuid": "975ee481-0648-5b2b-8b6e-bd84780f2d72", 00:23:31.639 "is_configured": true, 00:23:31.639 "data_offset": 2048, 00:23:31.639 "data_size": 63488 00:23:31.639 }, 00:23:31.639 { 00:23:31.639 "name": "BaseBdev2", 00:23:31.639 "uuid": "43d97263-ca28-5045-9530-d2a7b781b867", 00:23:31.639 "is_configured": true, 00:23:31.639 "data_offset": 2048, 00:23:31.639 "data_size": 63488 00:23:31.639 }, 00:23:31.639 { 00:23:31.639 "name": "BaseBdev3", 00:23:31.639 "uuid": "924dc1d8-c9bf-5923-924c-3ccb2277009d", 00:23:31.639 "is_configured": true, 00:23:31.639 "data_offset": 2048, 00:23:31.639 "data_size": 63488 00:23:31.639 }, 00:23:31.639 { 00:23:31.639 "name": "BaseBdev4", 00:23:31.639 "uuid": "370a577b-21a0-56ee-9f28-28172ab48bbd", 00:23:31.639 "is_configured": true, 00:23:31.639 "data_offset": 2048, 00:23:31.639 "data_size": 63488 00:23:31.639 } 00:23:31.639 ] 00:23:31.639 }' 00:23:31.639 13:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:31.639 13:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:23:31.639 13:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:31.898 13:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:31.898 13:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:32.835 13:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:32.835 13:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:32.835 13:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:32.835 13:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:32.835 13:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:32.835 13:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:32.835 13:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:32.835 13:18:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.835 13:18:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:32.835 13:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.835 13:18:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.835 13:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:32.835 "name": "raid_bdev1", 00:23:32.835 "uuid": "b21812c1-04f9-4e28-8644-bbf4f70af91f", 00:23:32.835 "strip_size_kb": 64, 00:23:32.835 "state": "online", 00:23:32.835 "raid_level": "raid5f", 00:23:32.835 "superblock": true, 00:23:32.835 "num_base_bdevs": 4, 00:23:32.835 "num_base_bdevs_discovered": 4, 
00:23:32.835 "num_base_bdevs_operational": 4, 00:23:32.835 "process": { 00:23:32.835 "type": "rebuild", 00:23:32.835 "target": "spare", 00:23:32.835 "progress": { 00:23:32.835 "blocks": 86400, 00:23:32.835 "percent": 45 00:23:32.835 } 00:23:32.835 }, 00:23:32.835 "base_bdevs_list": [ 00:23:32.835 { 00:23:32.835 "name": "spare", 00:23:32.835 "uuid": "975ee481-0648-5b2b-8b6e-bd84780f2d72", 00:23:32.835 "is_configured": true, 00:23:32.835 "data_offset": 2048, 00:23:32.835 "data_size": 63488 00:23:32.835 }, 00:23:32.835 { 00:23:32.835 "name": "BaseBdev2", 00:23:32.835 "uuid": "43d97263-ca28-5045-9530-d2a7b781b867", 00:23:32.835 "is_configured": true, 00:23:32.835 "data_offset": 2048, 00:23:32.835 "data_size": 63488 00:23:32.835 }, 00:23:32.835 { 00:23:32.835 "name": "BaseBdev3", 00:23:32.835 "uuid": "924dc1d8-c9bf-5923-924c-3ccb2277009d", 00:23:32.835 "is_configured": true, 00:23:32.835 "data_offset": 2048, 00:23:32.835 "data_size": 63488 00:23:32.835 }, 00:23:32.835 { 00:23:32.835 "name": "BaseBdev4", 00:23:32.835 "uuid": "370a577b-21a0-56ee-9f28-28172ab48bbd", 00:23:32.835 "is_configured": true, 00:23:32.835 "data_offset": 2048, 00:23:32.835 "data_size": 63488 00:23:32.835 } 00:23:32.835 ] 00:23:32.835 }' 00:23:32.835 13:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:32.835 13:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:32.835 13:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:33.095 13:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:33.095 13:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:34.037 13:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:34.037 13:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:23:34.037 13:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:34.037 13:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:34.038 13:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:34.038 13:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:34.038 13:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:34.038 13:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:34.038 13:18:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.038 13:18:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:34.038 13:18:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.038 13:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:34.038 "name": "raid_bdev1", 00:23:34.038 "uuid": "b21812c1-04f9-4e28-8644-bbf4f70af91f", 00:23:34.038 "strip_size_kb": 64, 00:23:34.038 "state": "online", 00:23:34.038 "raid_level": "raid5f", 00:23:34.038 "superblock": true, 00:23:34.038 "num_base_bdevs": 4, 00:23:34.038 "num_base_bdevs_discovered": 4, 00:23:34.038 "num_base_bdevs_operational": 4, 00:23:34.038 "process": { 00:23:34.038 "type": "rebuild", 00:23:34.038 "target": "spare", 00:23:34.038 "progress": { 00:23:34.038 "blocks": 109440, 00:23:34.038 "percent": 57 00:23:34.038 } 00:23:34.038 }, 00:23:34.038 "base_bdevs_list": [ 00:23:34.038 { 00:23:34.038 "name": "spare", 00:23:34.038 "uuid": "975ee481-0648-5b2b-8b6e-bd84780f2d72", 00:23:34.038 "is_configured": true, 00:23:34.038 "data_offset": 2048, 00:23:34.038 "data_size": 63488 00:23:34.038 }, 00:23:34.038 { 00:23:34.038 "name": "BaseBdev2", 
00:23:34.038 "uuid": "43d97263-ca28-5045-9530-d2a7b781b867", 00:23:34.038 "is_configured": true, 00:23:34.038 "data_offset": 2048, 00:23:34.038 "data_size": 63488 00:23:34.038 }, 00:23:34.038 { 00:23:34.038 "name": "BaseBdev3", 00:23:34.038 "uuid": "924dc1d8-c9bf-5923-924c-3ccb2277009d", 00:23:34.038 "is_configured": true, 00:23:34.038 "data_offset": 2048, 00:23:34.038 "data_size": 63488 00:23:34.038 }, 00:23:34.038 { 00:23:34.038 "name": "BaseBdev4", 00:23:34.038 "uuid": "370a577b-21a0-56ee-9f28-28172ab48bbd", 00:23:34.038 "is_configured": true, 00:23:34.038 "data_offset": 2048, 00:23:34.038 "data_size": 63488 00:23:34.038 } 00:23:34.038 ] 00:23:34.038 }' 00:23:34.038 13:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:34.038 13:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:34.038 13:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:34.038 13:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:34.038 13:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:35.415 13:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:35.415 13:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:35.415 13:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:35.415 13:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:35.415 13:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:35.415 13:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:35.415 13:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:23:35.415 13:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.415 13:18:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.415 13:18:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:35.415 13:18:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.415 13:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:35.415 "name": "raid_bdev1", 00:23:35.415 "uuid": "b21812c1-04f9-4e28-8644-bbf4f70af91f", 00:23:35.415 "strip_size_kb": 64, 00:23:35.415 "state": "online", 00:23:35.415 "raid_level": "raid5f", 00:23:35.415 "superblock": true, 00:23:35.415 "num_base_bdevs": 4, 00:23:35.415 "num_base_bdevs_discovered": 4, 00:23:35.415 "num_base_bdevs_operational": 4, 00:23:35.415 "process": { 00:23:35.415 "type": "rebuild", 00:23:35.415 "target": "spare", 00:23:35.415 "progress": { 00:23:35.415 "blocks": 132480, 00:23:35.415 "percent": 69 00:23:35.415 } 00:23:35.415 }, 00:23:35.415 "base_bdevs_list": [ 00:23:35.415 { 00:23:35.415 "name": "spare", 00:23:35.415 "uuid": "975ee481-0648-5b2b-8b6e-bd84780f2d72", 00:23:35.415 "is_configured": true, 00:23:35.415 "data_offset": 2048, 00:23:35.415 "data_size": 63488 00:23:35.415 }, 00:23:35.415 { 00:23:35.415 "name": "BaseBdev2", 00:23:35.415 "uuid": "43d97263-ca28-5045-9530-d2a7b781b867", 00:23:35.415 "is_configured": true, 00:23:35.415 "data_offset": 2048, 00:23:35.415 "data_size": 63488 00:23:35.415 }, 00:23:35.415 { 00:23:35.415 "name": "BaseBdev3", 00:23:35.415 "uuid": "924dc1d8-c9bf-5923-924c-3ccb2277009d", 00:23:35.415 "is_configured": true, 00:23:35.415 "data_offset": 2048, 00:23:35.415 "data_size": 63488 00:23:35.415 }, 00:23:35.415 { 00:23:35.415 "name": "BaseBdev4", 00:23:35.415 "uuid": "370a577b-21a0-56ee-9f28-28172ab48bbd", 00:23:35.415 "is_configured": true, 
00:23:35.415 "data_offset": 2048, 00:23:35.415 "data_size": 63488 00:23:35.415 } 00:23:35.415 ] 00:23:35.415 }' 00:23:35.415 13:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:35.415 13:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:35.415 13:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:35.415 13:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:35.415 13:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:36.352 13:18:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:36.352 13:18:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:36.352 13:18:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:36.352 13:18:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:36.352 13:18:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:36.352 13:18:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:36.352 13:18:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:36.352 13:18:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.352 13:18:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:36.352 13:18:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.352 13:18:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.352 13:18:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:23:36.352 "name": "raid_bdev1", 00:23:36.352 "uuid": "b21812c1-04f9-4e28-8644-bbf4f70af91f", 00:23:36.352 "strip_size_kb": 64, 00:23:36.352 "state": "online", 00:23:36.352 "raid_level": "raid5f", 00:23:36.352 "superblock": true, 00:23:36.352 "num_base_bdevs": 4, 00:23:36.352 "num_base_bdevs_discovered": 4, 00:23:36.352 "num_base_bdevs_operational": 4, 00:23:36.352 "process": { 00:23:36.352 "type": "rebuild", 00:23:36.352 "target": "spare", 00:23:36.352 "progress": { 00:23:36.352 "blocks": 153600, 00:23:36.352 "percent": 80 00:23:36.352 } 00:23:36.352 }, 00:23:36.352 "base_bdevs_list": [ 00:23:36.352 { 00:23:36.352 "name": "spare", 00:23:36.352 "uuid": "975ee481-0648-5b2b-8b6e-bd84780f2d72", 00:23:36.352 "is_configured": true, 00:23:36.352 "data_offset": 2048, 00:23:36.352 "data_size": 63488 00:23:36.352 }, 00:23:36.352 { 00:23:36.352 "name": "BaseBdev2", 00:23:36.352 "uuid": "43d97263-ca28-5045-9530-d2a7b781b867", 00:23:36.352 "is_configured": true, 00:23:36.352 "data_offset": 2048, 00:23:36.352 "data_size": 63488 00:23:36.352 }, 00:23:36.352 { 00:23:36.352 "name": "BaseBdev3", 00:23:36.352 "uuid": "924dc1d8-c9bf-5923-924c-3ccb2277009d", 00:23:36.352 "is_configured": true, 00:23:36.352 "data_offset": 2048, 00:23:36.352 "data_size": 63488 00:23:36.352 }, 00:23:36.352 { 00:23:36.352 "name": "BaseBdev4", 00:23:36.352 "uuid": "370a577b-21a0-56ee-9f28-28172ab48bbd", 00:23:36.352 "is_configured": true, 00:23:36.352 "data_offset": 2048, 00:23:36.352 "data_size": 63488 00:23:36.352 } 00:23:36.352 ] 00:23:36.352 }' 00:23:36.352 13:18:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:36.352 13:18:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:36.352 13:18:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:36.352 13:18:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare 
== \s\p\a\r\e ]] 00:23:36.353 13:18:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:37.732 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:37.733 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:37.733 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:37.733 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:37.733 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:37.733 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:37.733 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:37.733 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.733 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:37.733 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.733 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.733 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:37.733 "name": "raid_bdev1", 00:23:37.733 "uuid": "b21812c1-04f9-4e28-8644-bbf4f70af91f", 00:23:37.733 "strip_size_kb": 64, 00:23:37.733 "state": "online", 00:23:37.733 "raid_level": "raid5f", 00:23:37.733 "superblock": true, 00:23:37.733 "num_base_bdevs": 4, 00:23:37.733 "num_base_bdevs_discovered": 4, 00:23:37.733 "num_base_bdevs_operational": 4, 00:23:37.733 "process": { 00:23:37.733 "type": "rebuild", 00:23:37.733 "target": "spare", 00:23:37.733 "progress": { 00:23:37.733 "blocks": 174720, 00:23:37.733 "percent": 91 00:23:37.733 
} 00:23:37.733 }, 00:23:37.733 "base_bdevs_list": [ 00:23:37.733 { 00:23:37.733 "name": "spare", 00:23:37.733 "uuid": "975ee481-0648-5b2b-8b6e-bd84780f2d72", 00:23:37.733 "is_configured": true, 00:23:37.733 "data_offset": 2048, 00:23:37.733 "data_size": 63488 00:23:37.733 }, 00:23:37.733 { 00:23:37.733 "name": "BaseBdev2", 00:23:37.733 "uuid": "43d97263-ca28-5045-9530-d2a7b781b867", 00:23:37.733 "is_configured": true, 00:23:37.733 "data_offset": 2048, 00:23:37.733 "data_size": 63488 00:23:37.733 }, 00:23:37.733 { 00:23:37.733 "name": "BaseBdev3", 00:23:37.733 "uuid": "924dc1d8-c9bf-5923-924c-3ccb2277009d", 00:23:37.733 "is_configured": true, 00:23:37.733 "data_offset": 2048, 00:23:37.733 "data_size": 63488 00:23:37.733 }, 00:23:37.733 { 00:23:37.733 "name": "BaseBdev4", 00:23:37.733 "uuid": "370a577b-21a0-56ee-9f28-28172ab48bbd", 00:23:37.733 "is_configured": true, 00:23:37.733 "data_offset": 2048, 00:23:37.733 "data_size": 63488 00:23:37.733 } 00:23:37.733 ] 00:23:37.733 }' 00:23:37.733 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:37.733 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:37.733 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:37.733 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:37.733 13:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:38.299 [2024-12-06 13:18:25.173031] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:38.299 [2024-12-06 13:18:25.173159] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:38.299 [2024-12-06 13:18:25.173388] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:38.557 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:38.557 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:38.557 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:38.557 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:38.557 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:38.557 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:38.557 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:38.557 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:38.557 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.558 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:38.558 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.816 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:38.816 "name": "raid_bdev1", 00:23:38.816 "uuid": "b21812c1-04f9-4e28-8644-bbf4f70af91f", 00:23:38.816 "strip_size_kb": 64, 00:23:38.816 "state": "online", 00:23:38.816 "raid_level": "raid5f", 00:23:38.816 "superblock": true, 00:23:38.816 "num_base_bdevs": 4, 00:23:38.816 "num_base_bdevs_discovered": 4, 00:23:38.816 "num_base_bdevs_operational": 4, 00:23:38.816 "base_bdevs_list": [ 00:23:38.816 { 00:23:38.816 "name": "spare", 00:23:38.816 "uuid": "975ee481-0648-5b2b-8b6e-bd84780f2d72", 00:23:38.816 "is_configured": true, 00:23:38.816 "data_offset": 2048, 00:23:38.816 "data_size": 63488 00:23:38.816 }, 00:23:38.816 { 00:23:38.816 "name": "BaseBdev2", 00:23:38.816 "uuid": 
"43d97263-ca28-5045-9530-d2a7b781b867", 00:23:38.816 "is_configured": true, 00:23:38.816 "data_offset": 2048, 00:23:38.816 "data_size": 63488 00:23:38.816 }, 00:23:38.816 { 00:23:38.817 "name": "BaseBdev3", 00:23:38.817 "uuid": "924dc1d8-c9bf-5923-924c-3ccb2277009d", 00:23:38.817 "is_configured": true, 00:23:38.817 "data_offset": 2048, 00:23:38.817 "data_size": 63488 00:23:38.817 }, 00:23:38.817 { 00:23:38.817 "name": "BaseBdev4", 00:23:38.817 "uuid": "370a577b-21a0-56ee-9f28-28172ab48bbd", 00:23:38.817 "is_configured": true, 00:23:38.817 "data_offset": 2048, 00:23:38.817 "data_size": 63488 00:23:38.817 } 00:23:38.817 ] 00:23:38.817 }' 00:23:38.817 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:38.817 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:38.817 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:38.817 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:23:38.817 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:23:38.817 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:38.817 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:38.817 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:38.817 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:38.817 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:38.817 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:38.817 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:23:38.817 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.817 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:38.817 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.817 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:38.817 "name": "raid_bdev1", 00:23:38.817 "uuid": "b21812c1-04f9-4e28-8644-bbf4f70af91f", 00:23:38.817 "strip_size_kb": 64, 00:23:38.817 "state": "online", 00:23:38.817 "raid_level": "raid5f", 00:23:38.817 "superblock": true, 00:23:38.817 "num_base_bdevs": 4, 00:23:38.817 "num_base_bdevs_discovered": 4, 00:23:38.817 "num_base_bdevs_operational": 4, 00:23:38.817 "base_bdevs_list": [ 00:23:38.817 { 00:23:38.817 "name": "spare", 00:23:38.817 "uuid": "975ee481-0648-5b2b-8b6e-bd84780f2d72", 00:23:38.817 "is_configured": true, 00:23:38.817 "data_offset": 2048, 00:23:38.817 "data_size": 63488 00:23:38.817 }, 00:23:38.817 { 00:23:38.817 "name": "BaseBdev2", 00:23:38.817 "uuid": "43d97263-ca28-5045-9530-d2a7b781b867", 00:23:38.817 "is_configured": true, 00:23:38.817 "data_offset": 2048, 00:23:38.817 "data_size": 63488 00:23:38.817 }, 00:23:38.817 { 00:23:38.817 "name": "BaseBdev3", 00:23:38.817 "uuid": "924dc1d8-c9bf-5923-924c-3ccb2277009d", 00:23:38.817 "is_configured": true, 00:23:38.817 "data_offset": 2048, 00:23:38.817 "data_size": 63488 00:23:38.817 }, 00:23:38.817 { 00:23:38.817 "name": "BaseBdev4", 00:23:38.817 "uuid": "370a577b-21a0-56ee-9f28-28172ab48bbd", 00:23:38.817 "is_configured": true, 00:23:38.817 "data_offset": 2048, 00:23:38.817 "data_size": 63488 00:23:38.817 } 00:23:38.817 ] 00:23:38.817 }' 00:23:38.817 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:38.817 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:38.817 
13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:39.074 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:39.074 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:39.074 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:39.074 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:39.074 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:39.074 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:39.074 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:39.074 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:39.074 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:39.074 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:39.074 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:39.074 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:39.074 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.074 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.074 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:39.074 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.074 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:23:39.074 "name": "raid_bdev1", 00:23:39.074 "uuid": "b21812c1-04f9-4e28-8644-bbf4f70af91f", 00:23:39.074 "strip_size_kb": 64, 00:23:39.074 "state": "online", 00:23:39.074 "raid_level": "raid5f", 00:23:39.074 "superblock": true, 00:23:39.074 "num_base_bdevs": 4, 00:23:39.074 "num_base_bdevs_discovered": 4, 00:23:39.074 "num_base_bdevs_operational": 4, 00:23:39.074 "base_bdevs_list": [ 00:23:39.074 { 00:23:39.074 "name": "spare", 00:23:39.074 "uuid": "975ee481-0648-5b2b-8b6e-bd84780f2d72", 00:23:39.074 "is_configured": true, 00:23:39.074 "data_offset": 2048, 00:23:39.074 "data_size": 63488 00:23:39.074 }, 00:23:39.074 { 00:23:39.074 "name": "BaseBdev2", 00:23:39.074 "uuid": "43d97263-ca28-5045-9530-d2a7b781b867", 00:23:39.074 "is_configured": true, 00:23:39.074 "data_offset": 2048, 00:23:39.074 "data_size": 63488 00:23:39.074 }, 00:23:39.074 { 00:23:39.074 "name": "BaseBdev3", 00:23:39.074 "uuid": "924dc1d8-c9bf-5923-924c-3ccb2277009d", 00:23:39.074 "is_configured": true, 00:23:39.074 "data_offset": 2048, 00:23:39.074 "data_size": 63488 00:23:39.074 }, 00:23:39.074 { 00:23:39.074 "name": "BaseBdev4", 00:23:39.074 "uuid": "370a577b-21a0-56ee-9f28-28172ab48bbd", 00:23:39.074 "is_configured": true, 00:23:39.074 "data_offset": 2048, 00:23:39.074 "data_size": 63488 00:23:39.074 } 00:23:39.074 ] 00:23:39.074 }' 00:23:39.074 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:39.074 13:18:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:39.332 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:39.332 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.332 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:39.333 [2024-12-06 13:18:26.339435] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 
00:23:39.333 [2024-12-06 13:18:26.339504] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:39.333 [2024-12-06 13:18:26.339640] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:39.333 [2024-12-06 13:18:26.339782] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:39.333 [2024-12-06 13:18:26.339831] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:39.333 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.333 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:39.592 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.592 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:39.592 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:23:39.592 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.592 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:23:39.592 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:23:39.592 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:23:39.592 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:39.592 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:39.592 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:39.592 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 
00:23:39.592 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:39.592 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:39.592 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:23:39.592 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:39.592 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:39.592 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:39.851 /dev/nbd0 00:23:39.851 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:39.851 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:39.851 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:39.851 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:23:39.851 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:39.851 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:39.851 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:39.851 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:23:39.851 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:39.851 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:39.851 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:39.851 1+0 records in 
00:23:39.851 1+0 records out 00:23:39.851 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054518 s, 7.5 MB/s 00:23:39.851 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:39.851 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:23:39.851 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:39.851 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:39.851 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:23:39.851 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:39.851 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:39.851 13:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:23:40.109 /dev/nbd1 00:23:40.109 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:40.109 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:40.109 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:23:40.109 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:23:40.109 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:40.109 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:40.109 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:23:40.109 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:23:40.109 13:18:27 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:40.109 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:40.109 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:40.109 1+0 records in 00:23:40.109 1+0 records out 00:23:40.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373268 s, 11.0 MB/s 00:23:40.109 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:40.109 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:23:40.109 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:40.109 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:40.109 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:23:40.109 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:40.109 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:40.109 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:40.367 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:23:40.367 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:40.367 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:40.367 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:40.367 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 
-- # local i 00:23:40.367 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:40.367 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:40.625 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:40.625 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:40.625 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:40.625 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:40.625 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:40.625 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:40.625 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:40.625 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:40.625 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:40.625 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:23:40.882 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:40.882 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:40.882 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:40.882 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:40.882 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:40.882 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd1 /proc/partitions 00:23:40.882 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:40.882 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:40.882 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:23:40.882 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:23:40.882 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.882 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:41.140 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.140 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:41.140 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.140 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:41.140 [2024-12-06 13:18:27.905857] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:41.140 [2024-12-06 13:18:27.906133] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:41.140 [2024-12-06 13:18:27.906185] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:23:41.140 [2024-12-06 13:18:27.906203] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:41.140 [2024-12-06 13:18:27.909549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:41.140 [2024-12-06 13:18:27.909595] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:41.140 [2024-12-06 13:18:27.909720] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:41.140 [2024-12-06 13:18:27.909802] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:41.140 [2024-12-06 13:18:27.909993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:41.140 [2024-12-06 13:18:27.910211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:41.140 [2024-12-06 13:18:27.910365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:41.140 spare 00:23:41.140 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.140 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:23:41.140 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.140 13:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:41.140 [2024-12-06 13:18:28.010537] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:23:41.140 [2024-12-06 13:18:28.010817] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:41.140 [2024-12-06 13:18:28.011304] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:23:41.140 [2024-12-06 13:18:28.017852] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:23:41.140 [2024-12-06 13:18:28.018029] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:23:41.140 [2024-12-06 13:18:28.018347] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:41.141 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.141 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:41.141 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:41.141 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:41.141 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:41.141 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:41.141 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:41.141 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:41.141 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:41.141 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:41.141 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:41.141 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:41.141 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.141 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.141 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:41.141 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.141 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:41.141 "name": "raid_bdev1", 00:23:41.141 "uuid": "b21812c1-04f9-4e28-8644-bbf4f70af91f", 00:23:41.141 "strip_size_kb": 64, 00:23:41.141 "state": "online", 00:23:41.141 "raid_level": "raid5f", 00:23:41.141 "superblock": true, 00:23:41.141 "num_base_bdevs": 4, 00:23:41.141 "num_base_bdevs_discovered": 4, 00:23:41.141 "num_base_bdevs_operational": 4, 00:23:41.141 "base_bdevs_list": [ 00:23:41.141 { 
00:23:41.141 "name": "spare", 00:23:41.141 "uuid": "975ee481-0648-5b2b-8b6e-bd84780f2d72", 00:23:41.141 "is_configured": true, 00:23:41.141 "data_offset": 2048, 00:23:41.141 "data_size": 63488 00:23:41.141 }, 00:23:41.141 { 00:23:41.141 "name": "BaseBdev2", 00:23:41.141 "uuid": "43d97263-ca28-5045-9530-d2a7b781b867", 00:23:41.141 "is_configured": true, 00:23:41.141 "data_offset": 2048, 00:23:41.141 "data_size": 63488 00:23:41.141 }, 00:23:41.141 { 00:23:41.141 "name": "BaseBdev3", 00:23:41.141 "uuid": "924dc1d8-c9bf-5923-924c-3ccb2277009d", 00:23:41.141 "is_configured": true, 00:23:41.141 "data_offset": 2048, 00:23:41.141 "data_size": 63488 00:23:41.141 }, 00:23:41.141 { 00:23:41.141 "name": "BaseBdev4", 00:23:41.141 "uuid": "370a577b-21a0-56ee-9f28-28172ab48bbd", 00:23:41.141 "is_configured": true, 00:23:41.141 "data_offset": 2048, 00:23:41.141 "data_size": 63488 00:23:41.141 } 00:23:41.141 ] 00:23:41.141 }' 00:23:41.141 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:41.141 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:41.706 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:41.706 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:41.706 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:41.706 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:41.706 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:41.706 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:41.706 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.706 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:23:41.706 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.706 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.706 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:41.706 "name": "raid_bdev1", 00:23:41.706 "uuid": "b21812c1-04f9-4e28-8644-bbf4f70af91f", 00:23:41.706 "strip_size_kb": 64, 00:23:41.706 "state": "online", 00:23:41.706 "raid_level": "raid5f", 00:23:41.706 "superblock": true, 00:23:41.706 "num_base_bdevs": 4, 00:23:41.706 "num_base_bdevs_discovered": 4, 00:23:41.706 "num_base_bdevs_operational": 4, 00:23:41.706 "base_bdevs_list": [ 00:23:41.706 { 00:23:41.706 "name": "spare", 00:23:41.706 "uuid": "975ee481-0648-5b2b-8b6e-bd84780f2d72", 00:23:41.706 "is_configured": true, 00:23:41.706 "data_offset": 2048, 00:23:41.706 "data_size": 63488 00:23:41.706 }, 00:23:41.706 { 00:23:41.706 "name": "BaseBdev2", 00:23:41.706 "uuid": "43d97263-ca28-5045-9530-d2a7b781b867", 00:23:41.706 "is_configured": true, 00:23:41.706 "data_offset": 2048, 00:23:41.706 "data_size": 63488 00:23:41.706 }, 00:23:41.706 { 00:23:41.706 "name": "BaseBdev3", 00:23:41.706 "uuid": "924dc1d8-c9bf-5923-924c-3ccb2277009d", 00:23:41.706 "is_configured": true, 00:23:41.706 "data_offset": 2048, 00:23:41.706 "data_size": 63488 00:23:41.706 }, 00:23:41.706 { 00:23:41.706 "name": "BaseBdev4", 00:23:41.706 "uuid": "370a577b-21a0-56ee-9f28-28172ab48bbd", 00:23:41.706 "is_configured": true, 00:23:41.706 "data_offset": 2048, 00:23:41.706 "data_size": 63488 00:23:41.706 } 00:23:41.706 ] 00:23:41.706 }' 00:23:41.706 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:41.706 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:41.706 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:23:41.965 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:41.965 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:41.965 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.965 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:41.965 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:41.965 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.965 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:23:41.965 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:41.965 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.965 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:41.965 [2024-12-06 13:18:28.790465] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:41.965 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.965 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:41.965 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:41.965 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:41.965 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:41.965 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:41.965 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:41.965 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:41.965 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:41.965 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:41.965 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:41.965 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.965 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:41.965 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.965 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:41.965 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.965 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:41.965 "name": "raid_bdev1", 00:23:41.965 "uuid": "b21812c1-04f9-4e28-8644-bbf4f70af91f", 00:23:41.965 "strip_size_kb": 64, 00:23:41.965 "state": "online", 00:23:41.965 "raid_level": "raid5f", 00:23:41.965 "superblock": true, 00:23:41.965 "num_base_bdevs": 4, 00:23:41.965 "num_base_bdevs_discovered": 3, 00:23:41.965 "num_base_bdevs_operational": 3, 00:23:41.965 "base_bdevs_list": [ 00:23:41.965 { 00:23:41.965 "name": null, 00:23:41.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:41.965 "is_configured": false, 00:23:41.965 "data_offset": 0, 00:23:41.965 "data_size": 63488 00:23:41.965 }, 00:23:41.965 { 00:23:41.965 "name": "BaseBdev2", 00:23:41.965 "uuid": "43d97263-ca28-5045-9530-d2a7b781b867", 00:23:41.965 "is_configured": true, 00:23:41.965 "data_offset": 2048, 00:23:41.965 "data_size": 63488 00:23:41.965 }, 00:23:41.965 
{ 00:23:41.965 "name": "BaseBdev3", 00:23:41.965 "uuid": "924dc1d8-c9bf-5923-924c-3ccb2277009d", 00:23:41.965 "is_configured": true, 00:23:41.965 "data_offset": 2048, 00:23:41.965 "data_size": 63488 00:23:41.965 }, 00:23:41.965 { 00:23:41.965 "name": "BaseBdev4", 00:23:41.965 "uuid": "370a577b-21a0-56ee-9f28-28172ab48bbd", 00:23:41.965 "is_configured": true, 00:23:41.965 "data_offset": 2048, 00:23:41.965 "data_size": 63488 00:23:41.965 } 00:23:41.965 ] 00:23:41.965 }' 00:23:41.965 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:41.965 13:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:42.532 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:42.532 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.532 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:42.532 [2024-12-06 13:18:29.314711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:42.532 [2024-12-06 13:18:29.315299] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:42.532 [2024-12-06 13:18:29.315481] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:42.532 [2024-12-06 13:18:29.315542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:42.532 [2024-12-06 13:18:29.329734] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:23:42.532 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.532 13:18:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:23:42.532 [2024-12-06 13:18:29.339119] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:43.467 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:43.467 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:43.467 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:43.467 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:43.467 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:43.468 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:43.468 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.468 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.468 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:43.468 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.468 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:43.468 "name": "raid_bdev1", 00:23:43.468 "uuid": "b21812c1-04f9-4e28-8644-bbf4f70af91f", 00:23:43.468 "strip_size_kb": 64, 00:23:43.468 "state": "online", 00:23:43.468 
"raid_level": "raid5f", 00:23:43.468 "superblock": true, 00:23:43.468 "num_base_bdevs": 4, 00:23:43.468 "num_base_bdevs_discovered": 4, 00:23:43.468 "num_base_bdevs_operational": 4, 00:23:43.468 "process": { 00:23:43.468 "type": "rebuild", 00:23:43.468 "target": "spare", 00:23:43.468 "progress": { 00:23:43.468 "blocks": 17280, 00:23:43.468 "percent": 9 00:23:43.468 } 00:23:43.468 }, 00:23:43.468 "base_bdevs_list": [ 00:23:43.468 { 00:23:43.468 "name": "spare", 00:23:43.468 "uuid": "975ee481-0648-5b2b-8b6e-bd84780f2d72", 00:23:43.468 "is_configured": true, 00:23:43.468 "data_offset": 2048, 00:23:43.468 "data_size": 63488 00:23:43.468 }, 00:23:43.468 { 00:23:43.468 "name": "BaseBdev2", 00:23:43.468 "uuid": "43d97263-ca28-5045-9530-d2a7b781b867", 00:23:43.468 "is_configured": true, 00:23:43.468 "data_offset": 2048, 00:23:43.468 "data_size": 63488 00:23:43.468 }, 00:23:43.468 { 00:23:43.468 "name": "BaseBdev3", 00:23:43.468 "uuid": "924dc1d8-c9bf-5923-924c-3ccb2277009d", 00:23:43.468 "is_configured": true, 00:23:43.468 "data_offset": 2048, 00:23:43.468 "data_size": 63488 00:23:43.468 }, 00:23:43.468 { 00:23:43.468 "name": "BaseBdev4", 00:23:43.468 "uuid": "370a577b-21a0-56ee-9f28-28172ab48bbd", 00:23:43.468 "is_configured": true, 00:23:43.468 "data_offset": 2048, 00:23:43.468 "data_size": 63488 00:23:43.468 } 00:23:43.468 ] 00:23:43.468 }' 00:23:43.468 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:43.468 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:43.468 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:43.728 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:43.728 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:23:43.728 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.728 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:43.728 [2024-12-06 13:18:30.493178] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:43.728 [2024-12-06 13:18:30.554287] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:43.728 [2024-12-06 13:18:30.554591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:43.728 [2024-12-06 13:18:30.554624] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:43.728 [2024-12-06 13:18:30.554641] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:43.728 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.728 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:43.728 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:43.728 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:43.728 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:43.728 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:43.728 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:43.728 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:43.728 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:43.728 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:43.728 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:23:43.728 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:43.728 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.728 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.728 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:43.728 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.728 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:43.728 "name": "raid_bdev1", 00:23:43.728 "uuid": "b21812c1-04f9-4e28-8644-bbf4f70af91f", 00:23:43.728 "strip_size_kb": 64, 00:23:43.728 "state": "online", 00:23:43.728 "raid_level": "raid5f", 00:23:43.728 "superblock": true, 00:23:43.728 "num_base_bdevs": 4, 00:23:43.728 "num_base_bdevs_discovered": 3, 00:23:43.728 "num_base_bdevs_operational": 3, 00:23:43.728 "base_bdevs_list": [ 00:23:43.728 { 00:23:43.728 "name": null, 00:23:43.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:43.728 "is_configured": false, 00:23:43.728 "data_offset": 0, 00:23:43.728 "data_size": 63488 00:23:43.728 }, 00:23:43.728 { 00:23:43.728 "name": "BaseBdev2", 00:23:43.728 "uuid": "43d97263-ca28-5045-9530-d2a7b781b867", 00:23:43.728 "is_configured": true, 00:23:43.728 "data_offset": 2048, 00:23:43.728 "data_size": 63488 00:23:43.728 }, 00:23:43.728 { 00:23:43.728 "name": "BaseBdev3", 00:23:43.728 "uuid": "924dc1d8-c9bf-5923-924c-3ccb2277009d", 00:23:43.728 "is_configured": true, 00:23:43.728 "data_offset": 2048, 00:23:43.728 "data_size": 63488 00:23:43.728 }, 00:23:43.728 { 00:23:43.728 "name": "BaseBdev4", 00:23:43.728 "uuid": "370a577b-21a0-56ee-9f28-28172ab48bbd", 00:23:43.728 "is_configured": true, 00:23:43.728 "data_offset": 2048, 00:23:43.728 "data_size": 63488 00:23:43.728 } 00:23:43.728 ] 00:23:43.728 }' 
00:23:43.728 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:43.728 13:18:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:44.294 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:44.294 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.294 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:44.294 [2024-12-06 13:18:31.104496] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:44.294 [2024-12-06 13:18:31.104840] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:44.294 [2024-12-06 13:18:31.104931] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:23:44.294 [2024-12-06 13:18:31.105137] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:44.294 [2024-12-06 13:18:31.105938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:44.294 [2024-12-06 13:18:31.105980] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:44.294 [2024-12-06 13:18:31.106116] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:44.294 [2024-12-06 13:18:31.106145] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:44.294 [2024-12-06 13:18:31.106164] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:44.294 [2024-12-06 13:18:31.106198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:44.294 [2024-12-06 13:18:31.120169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:23:44.294 spare 00:23:44.294 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.294 13:18:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:23:44.294 [2024-12-06 13:18:31.129125] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:45.230 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:45.230 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:45.230 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:45.230 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:45.230 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:45.230 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:45.230 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.230 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.230 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:45.230 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.230 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:45.230 "name": "raid_bdev1", 00:23:45.230 "uuid": "b21812c1-04f9-4e28-8644-bbf4f70af91f", 00:23:45.230 "strip_size_kb": 64, 00:23:45.230 "state": 
"online", 00:23:45.230 "raid_level": "raid5f", 00:23:45.230 "superblock": true, 00:23:45.230 "num_base_bdevs": 4, 00:23:45.230 "num_base_bdevs_discovered": 4, 00:23:45.230 "num_base_bdevs_operational": 4, 00:23:45.230 "process": { 00:23:45.230 "type": "rebuild", 00:23:45.230 "target": "spare", 00:23:45.230 "progress": { 00:23:45.230 "blocks": 17280, 00:23:45.230 "percent": 9 00:23:45.230 } 00:23:45.230 }, 00:23:45.230 "base_bdevs_list": [ 00:23:45.230 { 00:23:45.230 "name": "spare", 00:23:45.230 "uuid": "975ee481-0648-5b2b-8b6e-bd84780f2d72", 00:23:45.230 "is_configured": true, 00:23:45.230 "data_offset": 2048, 00:23:45.230 "data_size": 63488 00:23:45.230 }, 00:23:45.230 { 00:23:45.230 "name": "BaseBdev2", 00:23:45.230 "uuid": "43d97263-ca28-5045-9530-d2a7b781b867", 00:23:45.230 "is_configured": true, 00:23:45.230 "data_offset": 2048, 00:23:45.230 "data_size": 63488 00:23:45.230 }, 00:23:45.230 { 00:23:45.230 "name": "BaseBdev3", 00:23:45.230 "uuid": "924dc1d8-c9bf-5923-924c-3ccb2277009d", 00:23:45.230 "is_configured": true, 00:23:45.230 "data_offset": 2048, 00:23:45.230 "data_size": 63488 00:23:45.230 }, 00:23:45.230 { 00:23:45.230 "name": "BaseBdev4", 00:23:45.230 "uuid": "370a577b-21a0-56ee-9f28-28172ab48bbd", 00:23:45.230 "is_configured": true, 00:23:45.230 "data_offset": 2048, 00:23:45.230 "data_size": 63488 00:23:45.230 } 00:23:45.230 ] 00:23:45.230 }' 00:23:45.231 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:45.231 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:45.231 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:45.490 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:45.490 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:23:45.490 13:18:32 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.490 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:45.490 [2024-12-06 13:18:32.290688] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:45.490 [2024-12-06 13:18:32.341652] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:45.490 [2024-12-06 13:18:32.341723] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:45.490 [2024-12-06 13:18:32.341751] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:45.490 [2024-12-06 13:18:32.341762] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:45.490 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.490 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:45.490 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:45.490 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:45.490 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:45.490 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:45.490 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:45.490 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:45.490 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:45.490 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:45.490 13:18:32 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:45.490 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:45.490 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.490 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:45.490 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.490 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.490 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:45.490 "name": "raid_bdev1", 00:23:45.490 "uuid": "b21812c1-04f9-4e28-8644-bbf4f70af91f", 00:23:45.490 "strip_size_kb": 64, 00:23:45.490 "state": "online", 00:23:45.490 "raid_level": "raid5f", 00:23:45.490 "superblock": true, 00:23:45.490 "num_base_bdevs": 4, 00:23:45.490 "num_base_bdevs_discovered": 3, 00:23:45.490 "num_base_bdevs_operational": 3, 00:23:45.490 "base_bdevs_list": [ 00:23:45.490 { 00:23:45.490 "name": null, 00:23:45.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:45.490 "is_configured": false, 00:23:45.490 "data_offset": 0, 00:23:45.490 "data_size": 63488 00:23:45.490 }, 00:23:45.490 { 00:23:45.490 "name": "BaseBdev2", 00:23:45.490 "uuid": "43d97263-ca28-5045-9530-d2a7b781b867", 00:23:45.490 "is_configured": true, 00:23:45.490 "data_offset": 2048, 00:23:45.490 "data_size": 63488 00:23:45.490 }, 00:23:45.490 { 00:23:45.490 "name": "BaseBdev3", 00:23:45.490 "uuid": "924dc1d8-c9bf-5923-924c-3ccb2277009d", 00:23:45.490 "is_configured": true, 00:23:45.490 "data_offset": 2048, 00:23:45.490 "data_size": 63488 00:23:45.490 }, 00:23:45.490 { 00:23:45.490 "name": "BaseBdev4", 00:23:45.490 "uuid": "370a577b-21a0-56ee-9f28-28172ab48bbd", 00:23:45.490 "is_configured": true, 00:23:45.490 "data_offset": 2048, 00:23:45.490 
"data_size": 63488 00:23:45.490 } 00:23:45.490 ] 00:23:45.490 }' 00:23:45.490 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:45.490 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:46.058 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:46.058 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:46.058 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:46.058 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:46.058 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:46.058 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:46.058 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:46.058 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.058 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:46.058 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.058 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:46.058 "name": "raid_bdev1", 00:23:46.058 "uuid": "b21812c1-04f9-4e28-8644-bbf4f70af91f", 00:23:46.058 "strip_size_kb": 64, 00:23:46.058 "state": "online", 00:23:46.058 "raid_level": "raid5f", 00:23:46.058 "superblock": true, 00:23:46.058 "num_base_bdevs": 4, 00:23:46.058 "num_base_bdevs_discovered": 3, 00:23:46.058 "num_base_bdevs_operational": 3, 00:23:46.058 "base_bdevs_list": [ 00:23:46.058 { 00:23:46.058 "name": null, 00:23:46.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:46.058 
"is_configured": false, 00:23:46.058 "data_offset": 0, 00:23:46.058 "data_size": 63488 00:23:46.058 }, 00:23:46.058 { 00:23:46.058 "name": "BaseBdev2", 00:23:46.058 "uuid": "43d97263-ca28-5045-9530-d2a7b781b867", 00:23:46.058 "is_configured": true, 00:23:46.058 "data_offset": 2048, 00:23:46.058 "data_size": 63488 00:23:46.058 }, 00:23:46.058 { 00:23:46.058 "name": "BaseBdev3", 00:23:46.058 "uuid": "924dc1d8-c9bf-5923-924c-3ccb2277009d", 00:23:46.058 "is_configured": true, 00:23:46.058 "data_offset": 2048, 00:23:46.058 "data_size": 63488 00:23:46.058 }, 00:23:46.058 { 00:23:46.058 "name": "BaseBdev4", 00:23:46.058 "uuid": "370a577b-21a0-56ee-9f28-28172ab48bbd", 00:23:46.058 "is_configured": true, 00:23:46.058 "data_offset": 2048, 00:23:46.058 "data_size": 63488 00:23:46.058 } 00:23:46.058 ] 00:23:46.058 }' 00:23:46.058 13:18:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:46.058 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:46.058 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:46.317 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:46.317 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:23:46.317 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.317 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:46.317 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.317 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:46.317 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.317 13:18:33 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:46.317 [2024-12-06 13:18:33.096465] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:46.317 [2024-12-06 13:18:33.096591] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:46.317 [2024-12-06 13:18:33.096632] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:23:46.317 [2024-12-06 13:18:33.096648] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:46.317 [2024-12-06 13:18:33.097370] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:46.317 [2024-12-06 13:18:33.097421] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:46.317 [2024-12-06 13:18:33.097583] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:46.317 [2024-12-06 13:18:33.097617] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:46.317 [2024-12-06 13:18:33.097637] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:46.317 [2024-12-06 13:18:33.097653] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:23:46.317 BaseBdev1 00:23:46.317 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.317 13:18:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:23:47.252 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:47.252 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:47.252 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:23:47.252 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:47.252 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:47.252 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:47.252 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:47.252 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:47.252 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:47.252 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:47.252 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:47.252 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.252 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.252 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:47.252 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.253 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:47.253 "name": "raid_bdev1", 00:23:47.253 "uuid": "b21812c1-04f9-4e28-8644-bbf4f70af91f", 00:23:47.253 "strip_size_kb": 64, 00:23:47.253 "state": "online", 00:23:47.253 "raid_level": "raid5f", 00:23:47.253 "superblock": true, 00:23:47.253 "num_base_bdevs": 4, 00:23:47.253 "num_base_bdevs_discovered": 3, 00:23:47.253 "num_base_bdevs_operational": 3, 00:23:47.253 "base_bdevs_list": [ 00:23:47.253 { 00:23:47.253 "name": null, 00:23:47.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:47.253 "is_configured": false, 00:23:47.253 
"data_offset": 0, 00:23:47.253 "data_size": 63488 00:23:47.253 }, 00:23:47.253 { 00:23:47.253 "name": "BaseBdev2", 00:23:47.253 "uuid": "43d97263-ca28-5045-9530-d2a7b781b867", 00:23:47.253 "is_configured": true, 00:23:47.253 "data_offset": 2048, 00:23:47.253 "data_size": 63488 00:23:47.253 }, 00:23:47.253 { 00:23:47.253 "name": "BaseBdev3", 00:23:47.253 "uuid": "924dc1d8-c9bf-5923-924c-3ccb2277009d", 00:23:47.253 "is_configured": true, 00:23:47.253 "data_offset": 2048, 00:23:47.253 "data_size": 63488 00:23:47.253 }, 00:23:47.253 { 00:23:47.253 "name": "BaseBdev4", 00:23:47.253 "uuid": "370a577b-21a0-56ee-9f28-28172ab48bbd", 00:23:47.253 "is_configured": true, 00:23:47.253 "data_offset": 2048, 00:23:47.253 "data_size": 63488 00:23:47.253 } 00:23:47.253 ] 00:23:47.253 }' 00:23:47.253 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:47.253 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:47.821 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:47.821 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:47.821 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:47.821 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:47.821 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:47.821 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:47.821 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.821 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.821 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:23:47.821 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.821 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:47.821 "name": "raid_bdev1", 00:23:47.821 "uuid": "b21812c1-04f9-4e28-8644-bbf4f70af91f", 00:23:47.822 "strip_size_kb": 64, 00:23:47.822 "state": "online", 00:23:47.822 "raid_level": "raid5f", 00:23:47.822 "superblock": true, 00:23:47.822 "num_base_bdevs": 4, 00:23:47.822 "num_base_bdevs_discovered": 3, 00:23:47.822 "num_base_bdevs_operational": 3, 00:23:47.822 "base_bdevs_list": [ 00:23:47.822 { 00:23:47.822 "name": null, 00:23:47.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:47.822 "is_configured": false, 00:23:47.822 "data_offset": 0, 00:23:47.822 "data_size": 63488 00:23:47.822 }, 00:23:47.822 { 00:23:47.822 "name": "BaseBdev2", 00:23:47.822 "uuid": "43d97263-ca28-5045-9530-d2a7b781b867", 00:23:47.822 "is_configured": true, 00:23:47.822 "data_offset": 2048, 00:23:47.822 "data_size": 63488 00:23:47.822 }, 00:23:47.822 { 00:23:47.822 "name": "BaseBdev3", 00:23:47.822 "uuid": "924dc1d8-c9bf-5923-924c-3ccb2277009d", 00:23:47.822 "is_configured": true, 00:23:47.822 "data_offset": 2048, 00:23:47.822 "data_size": 63488 00:23:47.822 }, 00:23:47.822 { 00:23:47.822 "name": "BaseBdev4", 00:23:47.822 "uuid": "370a577b-21a0-56ee-9f28-28172ab48bbd", 00:23:47.822 "is_configured": true, 00:23:47.822 "data_offset": 2048, 00:23:47.822 "data_size": 63488 00:23:47.822 } 00:23:47.822 ] 00:23:47.822 }' 00:23:47.822 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:47.822 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:47.822 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:47.822 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:47.822 
13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:47.822 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:23:47.822 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:47.822 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:47.822 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:47.822 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:47.822 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:47.822 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:47.822 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.822 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:47.822 [2024-12-06 13:18:34.753115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:47.822 [2024-12-06 13:18:34.753424] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:47.822 [2024-12-06 13:18:34.753450] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:47.822 request: 00:23:47.822 { 00:23:47.822 "base_bdev": "BaseBdev1", 00:23:47.822 "raid_bdev": "raid_bdev1", 00:23:47.822 "method": "bdev_raid_add_base_bdev", 00:23:47.822 "req_id": 1 00:23:47.822 } 00:23:47.822 Got JSON-RPC error response 00:23:47.822 response: 00:23:47.822 { 00:23:47.822 "code": -22, 00:23:47.822 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:23:47.822 } 00:23:47.822 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:47.822 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:23:47.822 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:47.822 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:47.822 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:47.822 13:18:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:23:48.756 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:48.757 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:48.757 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:48.757 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:48.757 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:48.757 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:48.757 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:48.757 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:48.757 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:48.757 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:48.757 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:48.757 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:48.757 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.757 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:49.015 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.015 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:49.015 "name": "raid_bdev1", 00:23:49.015 "uuid": "b21812c1-04f9-4e28-8644-bbf4f70af91f", 00:23:49.015 "strip_size_kb": 64, 00:23:49.015 "state": "online", 00:23:49.015 "raid_level": "raid5f", 00:23:49.015 "superblock": true, 00:23:49.015 "num_base_bdevs": 4, 00:23:49.015 "num_base_bdevs_discovered": 3, 00:23:49.015 "num_base_bdevs_operational": 3, 00:23:49.015 "base_bdevs_list": [ 00:23:49.015 { 00:23:49.015 "name": null, 00:23:49.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:49.015 "is_configured": false, 00:23:49.015 "data_offset": 0, 00:23:49.015 "data_size": 63488 00:23:49.015 }, 00:23:49.015 { 00:23:49.015 "name": "BaseBdev2", 00:23:49.015 "uuid": "43d97263-ca28-5045-9530-d2a7b781b867", 00:23:49.015 "is_configured": true, 00:23:49.015 "data_offset": 2048, 00:23:49.015 "data_size": 63488 00:23:49.015 }, 00:23:49.015 { 00:23:49.015 "name": "BaseBdev3", 00:23:49.015 "uuid": "924dc1d8-c9bf-5923-924c-3ccb2277009d", 00:23:49.015 "is_configured": true, 00:23:49.015 "data_offset": 2048, 00:23:49.015 "data_size": 63488 00:23:49.015 }, 00:23:49.015 { 00:23:49.015 "name": "BaseBdev4", 00:23:49.015 "uuid": "370a577b-21a0-56ee-9f28-28172ab48bbd", 00:23:49.015 "is_configured": true, 00:23:49.015 "data_offset": 2048, 00:23:49.015 "data_size": 63488 00:23:49.015 } 00:23:49.015 ] 00:23:49.015 }' 00:23:49.015 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:49.015 13:18:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:23:49.582 13:18:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:49.582 13:18:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:49.582 13:18:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:49.582 13:18:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:49.582 13:18:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:49.582 13:18:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:49.582 13:18:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.582 13:18:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:49.582 13:18:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:49.582 13:18:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.582 13:18:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:49.582 "name": "raid_bdev1", 00:23:49.582 "uuid": "b21812c1-04f9-4e28-8644-bbf4f70af91f", 00:23:49.582 "strip_size_kb": 64, 00:23:49.582 "state": "online", 00:23:49.582 "raid_level": "raid5f", 00:23:49.582 "superblock": true, 00:23:49.582 "num_base_bdevs": 4, 00:23:49.582 "num_base_bdevs_discovered": 3, 00:23:49.582 "num_base_bdevs_operational": 3, 00:23:49.582 "base_bdevs_list": [ 00:23:49.582 { 00:23:49.582 "name": null, 00:23:49.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:49.582 "is_configured": false, 00:23:49.582 "data_offset": 0, 00:23:49.582 "data_size": 63488 00:23:49.582 }, 00:23:49.582 { 00:23:49.582 "name": "BaseBdev2", 00:23:49.582 "uuid": "43d97263-ca28-5045-9530-d2a7b781b867", 00:23:49.582 "is_configured": true, 
00:23:49.582 "data_offset": 2048, 00:23:49.582 "data_size": 63488 00:23:49.582 }, 00:23:49.582 { 00:23:49.582 "name": "BaseBdev3", 00:23:49.582 "uuid": "924dc1d8-c9bf-5923-924c-3ccb2277009d", 00:23:49.582 "is_configured": true, 00:23:49.583 "data_offset": 2048, 00:23:49.583 "data_size": 63488 00:23:49.583 }, 00:23:49.583 { 00:23:49.583 "name": "BaseBdev4", 00:23:49.583 "uuid": "370a577b-21a0-56ee-9f28-28172ab48bbd", 00:23:49.583 "is_configured": true, 00:23:49.583 "data_offset": 2048, 00:23:49.583 "data_size": 63488 00:23:49.583 } 00:23:49.583 ] 00:23:49.583 }' 00:23:49.583 13:18:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:49.583 13:18:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:49.583 13:18:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:49.583 13:18:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:49.583 13:18:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85880 00:23:49.583 13:18:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85880 ']' 00:23:49.583 13:18:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85880 00:23:49.583 13:18:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:23:49.583 13:18:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:49.583 13:18:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85880 00:23:49.583 13:18:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:49.583 13:18:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:49.583 killing process with pid 85880 00:23:49.583 13:18:36 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85880' 00:23:49.583 13:18:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85880 00:23:49.583 Received shutdown signal, test time was about 60.000000 seconds 00:23:49.583 00:23:49.583 Latency(us) 00:23:49.583 [2024-12-06T13:18:36.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.583 [2024-12-06T13:18:36.599Z] =================================================================================================================== 00:23:49.583 [2024-12-06T13:18:36.599Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:49.583 13:18:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85880 00:23:49.583 [2024-12-06 13:18:36.488360] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:49.583 [2024-12-06 13:18:36.488579] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:49.583 [2024-12-06 13:18:36.488696] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:49.583 [2024-12-06 13:18:36.488719] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:23:50.150 [2024-12-06 13:18:36.954985] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:51.086 ************************************ 00:23:51.086 END TEST raid5f_rebuild_test_sb 00:23:51.086 ************************************ 00:23:51.086 13:18:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:23:51.086 00:23:51.086 real 0m28.839s 00:23:51.086 user 0m37.364s 00:23:51.086 sys 0m2.984s 00:23:51.086 13:18:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:51.086 13:18:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:51.345 13:18:38 
bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:23:51.345 13:18:38 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:23:51.345 13:18:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:51.345 13:18:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:51.345 13:18:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:51.345 ************************************ 00:23:51.345 START TEST raid_state_function_test_sb_4k 00:23:51.345 ************************************ 00:23:51.345 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:23:51.345 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:23:51.345 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:23:51.345 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:23:51.345 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:23:51.345 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:23:51.345 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:51.345 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:23:51.345 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:51.345 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:51.345 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:23:51.345 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:51.345 13:18:38 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:51.345 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:51.345 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:23:51.346 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:23:51.346 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:23:51.346 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:23:51.346 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:23:51.346 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:23:51.346 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:23:51.346 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:23:51.346 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:23:51.346 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86730 00:23:51.346 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:51.346 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86730' 00:23:51.346 Process raid pid: 86730 00:23:51.346 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86730 00:23:51.346 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86730 ']' 00:23:51.346 13:18:38 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.346 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:51.346 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.346 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:51.346 13:18:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:51.346 [2024-12-06 13:18:38.278576] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:23:51.346 [2024-12-06 13:18:38.278781] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.605 [2024-12-06 13:18:38.467041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.864 [2024-12-06 13:18:38.618661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.124 [2024-12-06 13:18:38.891830] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:52.124 [2024-12-06 13:18:38.891877] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:52.383 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:52.383 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:23:52.383 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:23:52.383 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.383 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:52.383 [2024-12-06 13:18:39.303439] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:52.383 [2024-12-06 13:18:39.303553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:52.383 [2024-12-06 13:18:39.303573] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:52.383 [2024-12-06 13:18:39.303591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:52.383 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.383 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:52.383 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:52.383 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:52.383 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:52.383 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:52.383 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:52.383 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:52.383 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:52.383 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:52.383 
13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:52.383 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:52.383 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.383 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:52.383 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:52.383 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.383 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:52.383 "name": "Existed_Raid", 00:23:52.383 "uuid": "90025ed8-ce19-47bd-9d4e-703d72eb2d81", 00:23:52.383 "strip_size_kb": 0, 00:23:52.383 "state": "configuring", 00:23:52.383 "raid_level": "raid1", 00:23:52.383 "superblock": true, 00:23:52.383 "num_base_bdevs": 2, 00:23:52.383 "num_base_bdevs_discovered": 0, 00:23:52.383 "num_base_bdevs_operational": 2, 00:23:52.383 "base_bdevs_list": [ 00:23:52.383 { 00:23:52.383 "name": "BaseBdev1", 00:23:52.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.383 "is_configured": false, 00:23:52.383 "data_offset": 0, 00:23:52.383 "data_size": 0 00:23:52.383 }, 00:23:52.383 { 00:23:52.383 "name": "BaseBdev2", 00:23:52.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.383 "is_configured": false, 00:23:52.383 "data_offset": 0, 00:23:52.383 "data_size": 0 00:23:52.383 } 00:23:52.383 ] 00:23:52.383 }' 00:23:52.383 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:52.383 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:52.951 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:23:52.951 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.951 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:52.951 [2024-12-06 13:18:39.815533] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:52.951 [2024-12-06 13:18:39.815583] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:23:52.951 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.951 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:52.951 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.951 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:52.951 [2024-12-06 13:18:39.827507] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:52.951 [2024-12-06 13:18:39.827707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:52.951 [2024-12-06 13:18:39.827876] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:52.951 [2024-12-06 13:18:39.828022] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:52.951 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.951 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:23:52.951 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.951 13:18:39 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:52.951 [2024-12-06 13:18:39.880632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:52.951 BaseBdev1 00:23:52.951 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.951 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:23:52.951 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:23:52.951 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:52.951 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:23:52.951 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:52.951 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:52.952 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:52.952 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.952 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:52.952 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.952 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:52.952 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.952 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:52.952 [ 00:23:52.952 { 00:23:52.952 "name": "BaseBdev1", 00:23:52.952 "aliases": [ 00:23:52.952 
"a5511a30-b40c-41ff-8249-ae0e11dafa82" 00:23:52.952 ], 00:23:52.952 "product_name": "Malloc disk", 00:23:52.952 "block_size": 4096, 00:23:52.952 "num_blocks": 8192, 00:23:52.952 "uuid": "a5511a30-b40c-41ff-8249-ae0e11dafa82", 00:23:52.952 "assigned_rate_limits": { 00:23:52.952 "rw_ios_per_sec": 0, 00:23:52.952 "rw_mbytes_per_sec": 0, 00:23:52.952 "r_mbytes_per_sec": 0, 00:23:52.952 "w_mbytes_per_sec": 0 00:23:52.952 }, 00:23:52.952 "claimed": true, 00:23:52.952 "claim_type": "exclusive_write", 00:23:52.952 "zoned": false, 00:23:52.952 "supported_io_types": { 00:23:52.952 "read": true, 00:23:52.952 "write": true, 00:23:52.952 "unmap": true, 00:23:52.952 "flush": true, 00:23:52.952 "reset": true, 00:23:52.952 "nvme_admin": false, 00:23:52.952 "nvme_io": false, 00:23:52.952 "nvme_io_md": false, 00:23:52.952 "write_zeroes": true, 00:23:52.952 "zcopy": true, 00:23:52.952 "get_zone_info": false, 00:23:52.952 "zone_management": false, 00:23:52.952 "zone_append": false, 00:23:52.952 "compare": false, 00:23:52.952 "compare_and_write": false, 00:23:52.952 "abort": true, 00:23:52.952 "seek_hole": false, 00:23:52.952 "seek_data": false, 00:23:52.952 "copy": true, 00:23:52.952 "nvme_iov_md": false 00:23:52.952 }, 00:23:52.952 "memory_domains": [ 00:23:52.952 { 00:23:52.952 "dma_device_id": "system", 00:23:52.952 "dma_device_type": 1 00:23:52.952 }, 00:23:52.952 { 00:23:52.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.952 "dma_device_type": 2 00:23:52.952 } 00:23:52.952 ], 00:23:52.952 "driver_specific": {} 00:23:52.952 } 00:23:52.952 ] 00:23:52.952 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.952 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:23:52.952 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:52.952 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:52.952 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:52.952 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:52.952 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:52.952 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:52.952 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:52.952 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:52.952 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:52.952 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:52.952 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:52.952 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.952 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:52.952 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:52.952 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.211 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:53.211 "name": "Existed_Raid", 00:23:53.211 "uuid": "3f76956d-05df-45e8-853c-f8fdfc953eb4", 00:23:53.211 "strip_size_kb": 0, 00:23:53.211 "state": "configuring", 00:23:53.211 "raid_level": "raid1", 00:23:53.211 "superblock": true, 00:23:53.211 "num_base_bdevs": 2, 00:23:53.211 
"num_base_bdevs_discovered": 1, 00:23:53.211 "num_base_bdevs_operational": 2, 00:23:53.211 "base_bdevs_list": [ 00:23:53.211 { 00:23:53.211 "name": "BaseBdev1", 00:23:53.211 "uuid": "a5511a30-b40c-41ff-8249-ae0e11dafa82", 00:23:53.211 "is_configured": true, 00:23:53.211 "data_offset": 256, 00:23:53.211 "data_size": 7936 00:23:53.211 }, 00:23:53.211 { 00:23:53.211 "name": "BaseBdev2", 00:23:53.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:53.211 "is_configured": false, 00:23:53.211 "data_offset": 0, 00:23:53.211 "data_size": 0 00:23:53.211 } 00:23:53.211 ] 00:23:53.211 }' 00:23:53.211 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:53.211 13:18:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:53.469 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:53.469 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.469 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:53.469 [2024-12-06 13:18:40.432896] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:53.469 [2024-12-06 13:18:40.432972] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:23:53.470 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.470 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:53.470 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.470 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:53.470 [2024-12-06 13:18:40.440920] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:53.470 [2024-12-06 13:18:40.443702] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:53.470 [2024-12-06 13:18:40.443752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:53.470 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.470 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:23:53.470 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:53.470 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:53.470 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:53.470 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:53.470 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:53.470 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:53.470 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:53.470 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:53.470 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:53.470 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:53.470 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:53.470 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:23:53.470 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.470 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:53.470 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:53.470 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.728 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:53.728 "name": "Existed_Raid", 00:23:53.728 "uuid": "19cef804-c7d2-42c1-912c-ac07afa21b17", 00:23:53.728 "strip_size_kb": 0, 00:23:53.728 "state": "configuring", 00:23:53.728 "raid_level": "raid1", 00:23:53.728 "superblock": true, 00:23:53.728 "num_base_bdevs": 2, 00:23:53.728 "num_base_bdevs_discovered": 1, 00:23:53.728 "num_base_bdevs_operational": 2, 00:23:53.728 "base_bdevs_list": [ 00:23:53.728 { 00:23:53.728 "name": "BaseBdev1", 00:23:53.728 "uuid": "a5511a30-b40c-41ff-8249-ae0e11dafa82", 00:23:53.728 "is_configured": true, 00:23:53.728 "data_offset": 256, 00:23:53.728 "data_size": 7936 00:23:53.728 }, 00:23:53.728 { 00:23:53.728 "name": "BaseBdev2", 00:23:53.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:53.728 "is_configured": false, 00:23:53.728 "data_offset": 0, 00:23:53.728 "data_size": 0 00:23:53.728 } 00:23:53.728 ] 00:23:53.728 }' 00:23:53.728 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:53.728 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:53.987 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:23:53.987 13:18:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.987 13:18:40 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:53.987 [2024-12-06 13:18:40.999460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:53.987 [2024-12-06 13:18:40.999919] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:53.987 [2024-12-06 13:18:40.999941] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:53.987 BaseBdev2 00:23:53.987 [2024-12-06 13:18:41.000312] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:53.987 [2024-12-06 13:18:41.000616] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:53.987 [2024-12-06 13:18:41.000641] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:23:53.987 [2024-12-06 13:18:41.000817] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:53.987 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.246 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:23:54.246 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:23:54.246 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:54.246 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:23:54.246 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:54.246 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:54.246 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:54.246 13:18:41 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.246 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:54.246 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.246 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:54.246 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.246 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:54.246 [ 00:23:54.246 { 00:23:54.246 "name": "BaseBdev2", 00:23:54.246 "aliases": [ 00:23:54.246 "c504512a-6974-4fae-9ca9-dec0069f740c" 00:23:54.246 ], 00:23:54.246 "product_name": "Malloc disk", 00:23:54.246 "block_size": 4096, 00:23:54.246 "num_blocks": 8192, 00:23:54.246 "uuid": "c504512a-6974-4fae-9ca9-dec0069f740c", 00:23:54.246 "assigned_rate_limits": { 00:23:54.246 "rw_ios_per_sec": 0, 00:23:54.247 "rw_mbytes_per_sec": 0, 00:23:54.247 "r_mbytes_per_sec": 0, 00:23:54.247 "w_mbytes_per_sec": 0 00:23:54.247 }, 00:23:54.247 "claimed": true, 00:23:54.247 "claim_type": "exclusive_write", 00:23:54.247 "zoned": false, 00:23:54.247 "supported_io_types": { 00:23:54.247 "read": true, 00:23:54.247 "write": true, 00:23:54.247 "unmap": true, 00:23:54.247 "flush": true, 00:23:54.247 "reset": true, 00:23:54.247 "nvme_admin": false, 00:23:54.247 "nvme_io": false, 00:23:54.247 "nvme_io_md": false, 00:23:54.247 "write_zeroes": true, 00:23:54.247 "zcopy": true, 00:23:54.247 "get_zone_info": false, 00:23:54.247 "zone_management": false, 00:23:54.247 "zone_append": false, 00:23:54.247 "compare": false, 00:23:54.247 "compare_and_write": false, 00:23:54.247 "abort": true, 00:23:54.247 "seek_hole": false, 00:23:54.247 "seek_data": false, 00:23:54.247 "copy": true, 00:23:54.247 "nvme_iov_md": false 
00:23:54.247 }, 00:23:54.247 "memory_domains": [ 00:23:54.247 { 00:23:54.247 "dma_device_id": "system", 00:23:54.247 "dma_device_type": 1 00:23:54.247 }, 00:23:54.247 { 00:23:54.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:54.247 "dma_device_type": 2 00:23:54.247 } 00:23:54.247 ], 00:23:54.247 "driver_specific": {} 00:23:54.247 } 00:23:54.247 ] 00:23:54.247 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.247 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:23:54.247 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:54.247 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:54.247 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:23:54.247 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:54.247 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:54.247 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:54.247 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:54.247 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:54.247 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:54.247 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:54.247 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:54.247 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:23:54.247 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:54.247 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.247 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:54.247 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:54.247 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.247 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:54.247 "name": "Existed_Raid", 00:23:54.247 "uuid": "19cef804-c7d2-42c1-912c-ac07afa21b17", 00:23:54.247 "strip_size_kb": 0, 00:23:54.247 "state": "online", 00:23:54.247 "raid_level": "raid1", 00:23:54.247 "superblock": true, 00:23:54.247 "num_base_bdevs": 2, 00:23:54.247 "num_base_bdevs_discovered": 2, 00:23:54.247 "num_base_bdevs_operational": 2, 00:23:54.247 "base_bdevs_list": [ 00:23:54.247 { 00:23:54.247 "name": "BaseBdev1", 00:23:54.247 "uuid": "a5511a30-b40c-41ff-8249-ae0e11dafa82", 00:23:54.247 "is_configured": true, 00:23:54.247 "data_offset": 256, 00:23:54.247 "data_size": 7936 00:23:54.247 }, 00:23:54.247 { 00:23:54.247 "name": "BaseBdev2", 00:23:54.247 "uuid": "c504512a-6974-4fae-9ca9-dec0069f740c", 00:23:54.247 "is_configured": true, 00:23:54.247 "data_offset": 256, 00:23:54.247 "data_size": 7936 00:23:54.247 } 00:23:54.247 ] 00:23:54.247 }' 00:23:54.247 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:54.247 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:54.815 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:23:54.815 13:18:41 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:54.815 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:54.815 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:54.815 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:23:54.815 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:54.815 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:54.815 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:54.815 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.815 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:54.815 [2024-12-06 13:18:41.576143] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:54.815 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.815 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:54.815 "name": "Existed_Raid", 00:23:54.815 "aliases": [ 00:23:54.815 "19cef804-c7d2-42c1-912c-ac07afa21b17" 00:23:54.815 ], 00:23:54.815 "product_name": "Raid Volume", 00:23:54.815 "block_size": 4096, 00:23:54.815 "num_blocks": 7936, 00:23:54.815 "uuid": "19cef804-c7d2-42c1-912c-ac07afa21b17", 00:23:54.815 "assigned_rate_limits": { 00:23:54.815 "rw_ios_per_sec": 0, 00:23:54.815 "rw_mbytes_per_sec": 0, 00:23:54.815 "r_mbytes_per_sec": 0, 00:23:54.815 "w_mbytes_per_sec": 0 00:23:54.815 }, 00:23:54.815 "claimed": false, 00:23:54.815 "zoned": false, 00:23:54.815 "supported_io_types": { 00:23:54.815 "read": true, 
00:23:54.815 "write": true, 00:23:54.815 "unmap": false, 00:23:54.815 "flush": false, 00:23:54.815 "reset": true, 00:23:54.815 "nvme_admin": false, 00:23:54.815 "nvme_io": false, 00:23:54.815 "nvme_io_md": false, 00:23:54.815 "write_zeroes": true, 00:23:54.815 "zcopy": false, 00:23:54.815 "get_zone_info": false, 00:23:54.815 "zone_management": false, 00:23:54.815 "zone_append": false, 00:23:54.815 "compare": false, 00:23:54.815 "compare_and_write": false, 00:23:54.815 "abort": false, 00:23:54.815 "seek_hole": false, 00:23:54.815 "seek_data": false, 00:23:54.815 "copy": false, 00:23:54.815 "nvme_iov_md": false 00:23:54.815 }, 00:23:54.815 "memory_domains": [ 00:23:54.815 { 00:23:54.815 "dma_device_id": "system", 00:23:54.815 "dma_device_type": 1 00:23:54.815 }, 00:23:54.815 { 00:23:54.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:54.815 "dma_device_type": 2 00:23:54.815 }, 00:23:54.815 { 00:23:54.815 "dma_device_id": "system", 00:23:54.815 "dma_device_type": 1 00:23:54.815 }, 00:23:54.815 { 00:23:54.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:54.815 "dma_device_type": 2 00:23:54.815 } 00:23:54.815 ], 00:23:54.815 "driver_specific": { 00:23:54.815 "raid": { 00:23:54.815 "uuid": "19cef804-c7d2-42c1-912c-ac07afa21b17", 00:23:54.815 "strip_size_kb": 0, 00:23:54.815 "state": "online", 00:23:54.815 "raid_level": "raid1", 00:23:54.815 "superblock": true, 00:23:54.815 "num_base_bdevs": 2, 00:23:54.815 "num_base_bdevs_discovered": 2, 00:23:54.815 "num_base_bdevs_operational": 2, 00:23:54.815 "base_bdevs_list": [ 00:23:54.815 { 00:23:54.815 "name": "BaseBdev1", 00:23:54.815 "uuid": "a5511a30-b40c-41ff-8249-ae0e11dafa82", 00:23:54.815 "is_configured": true, 00:23:54.815 "data_offset": 256, 00:23:54.815 "data_size": 7936 00:23:54.815 }, 00:23:54.815 { 00:23:54.815 "name": "BaseBdev2", 00:23:54.815 "uuid": "c504512a-6974-4fae-9ca9-dec0069f740c", 00:23:54.815 "is_configured": true, 00:23:54.815 "data_offset": 256, 00:23:54.815 "data_size": 7936 00:23:54.815 } 
00:23:54.815 ] 00:23:54.815 } 00:23:54.815 } 00:23:54.815 }' 00:23:54.815 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:54.815 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:23:54.815 BaseBdev2' 00:23:54.815 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:54.815 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:23:54.815 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:54.815 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:23:54.815 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.815 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:54.815 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:54.815 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.815 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:23:54.815 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:23:54.815 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:54.815 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:54.815 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:54.815 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.815 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:55.075 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.075 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:23:55.075 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:23:55.075 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:55.075 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.075 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:55.075 [2024-12-06 13:18:41.875931] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:55.075 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.075 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:23:55.075 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:23:55.075 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:55.075 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:23:55.075 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:23:55.075 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:23:55.075 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:55.075 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:55.075 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:55.075 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:55.075 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:55.075 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:55.075 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:55.075 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:55.075 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:55.075 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:55.075 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:55.075 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.075 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:55.075 13:18:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.075 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:55.075 "name": "Existed_Raid", 00:23:55.075 "uuid": "19cef804-c7d2-42c1-912c-ac07afa21b17", 00:23:55.075 "strip_size_kb": 0, 00:23:55.075 "state": "online", 00:23:55.075 "raid_level": "raid1", 00:23:55.075 "superblock": true, 00:23:55.075 "num_base_bdevs": 2, 00:23:55.075 
"num_base_bdevs_discovered": 1, 00:23:55.075 "num_base_bdevs_operational": 1, 00:23:55.075 "base_bdevs_list": [ 00:23:55.075 { 00:23:55.075 "name": null, 00:23:55.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:55.075 "is_configured": false, 00:23:55.075 "data_offset": 0, 00:23:55.075 "data_size": 7936 00:23:55.075 }, 00:23:55.075 { 00:23:55.075 "name": "BaseBdev2", 00:23:55.075 "uuid": "c504512a-6974-4fae-9ca9-dec0069f740c", 00:23:55.075 "is_configured": true, 00:23:55.075 "data_offset": 256, 00:23:55.075 "data_size": 7936 00:23:55.075 } 00:23:55.075 ] 00:23:55.075 }' 00:23:55.075 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:55.075 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:55.643 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:23:55.643 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:55.643 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:55.643 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.643 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:55.643 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:55.643 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.643 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:55.643 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:55.643 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:23:55.643 13:18:42 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.643 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:55.643 [2024-12-06 13:18:42.556856] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:55.643 [2024-12-06 13:18:42.557016] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:55.643 [2024-12-06 13:18:42.649826] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:55.643 [2024-12-06 13:18:42.649900] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:55.643 [2024-12-06 13:18:42.649924] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:23:55.643 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.643 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:55.643 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:55.643 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:55.643 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.643 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:55.643 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:23:55.902 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.902 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:23:55.902 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' 
']' 00:23:55.902 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:23:55.902 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86730 00:23:55.902 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86730 ']' 00:23:55.902 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86730 00:23:55.902 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:23:55.902 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:55.902 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86730 00:23:55.902 killing process with pid 86730 00:23:55.902 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:55.902 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:55.902 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86730' 00:23:55.902 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86730 00:23:55.902 [2024-12-06 13:18:42.740134] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:55.902 13:18:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86730 00:23:55.902 [2024-12-06 13:18:42.757436] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:57.277 13:18:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:23:57.277 00:23:57.277 real 0m5.789s 00:23:57.277 user 0m8.627s 00:23:57.277 sys 0m0.898s 00:23:57.277 ************************************ 00:23:57.277 END TEST raid_state_function_test_sb_4k 00:23:57.277 
************************************ 00:23:57.277 13:18:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:57.277 13:18:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:57.277 13:18:43 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:23:57.277 13:18:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:57.277 13:18:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:57.277 13:18:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:57.277 ************************************ 00:23:57.277 START TEST raid_superblock_test_4k 00:23:57.277 ************************************ 00:23:57.277 13:18:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:23:57.277 13:18:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:23:57.277 13:18:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:23:57.277 13:18:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:23:57.277 13:18:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:23:57.277 13:18:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:23:57.277 13:18:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:23:57.277 13:18:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:23:57.277 13:18:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:23:57.277 13:18:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:23:57.277 13:18:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 
00:23:57.277 13:18:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:23:57.277 13:18:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:23:57.277 13:18:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:23:57.277 13:18:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:23:57.277 13:18:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:23:57.277 13:18:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=87001 00:23:57.277 13:18:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 87001 00:23:57.277 13:18:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:23:57.277 13:18:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 87001 ']' 00:23:57.277 13:18:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.277 13:18:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:57.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.277 13:18:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.277 13:18:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:57.277 13:18:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:57.277 [2024-12-06 13:18:44.083096] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:23:57.277 [2024-12-06 13:18:44.083667] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87001 ] 00:23:57.277 [2024-12-06 13:18:44.259118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.535 [2024-12-06 13:18:44.412040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.793 [2024-12-06 13:18:44.645627] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:57.794 [2024-12-06 13:18:44.645708] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:58.359 malloc1 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:58.359 [2024-12-06 13:18:45.183748] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:58.359 [2024-12-06 13:18:45.184068] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:58.359 [2024-12-06 13:18:45.184236] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:58.359 [2024-12-06 13:18:45.184403] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:58.359 [2024-12-06 13:18:45.187818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:58.359 [2024-12-06 13:18:45.188027] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:58.359 pt1 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:58.359 malloc2 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:58.359 [2024-12-06 13:18:45.246101] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:58.359 [2024-12-06 13:18:45.246215] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:58.359 [2024-12-06 13:18:45.246252] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:58.359 [2024-12-06 13:18:45.246267] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:58.359 [2024-12-06 13:18:45.249457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:58.359 [2024-12-06 
13:18:45.249545] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:58.359 pt2 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:58.359 [2024-12-06 13:18:45.258248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:58.359 [2024-12-06 13:18:45.261118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:58.359 [2024-12-06 13:18:45.261398] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:58.359 [2024-12-06 13:18:45.261422] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:58.359 [2024-12-06 13:18:45.261813] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:58.359 [2024-12-06 13:18:45.262103] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:58.359 [2024-12-06 13:18:45.262138] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:58.359 [2024-12-06 13:18:45.262387] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:58.359 "name": "raid_bdev1", 00:23:58.359 "uuid": "d3517305-32a2-4ae5-a4c2-1a202defa9b6", 00:23:58.359 "strip_size_kb": 0, 00:23:58.359 "state": "online", 00:23:58.359 "raid_level": "raid1", 00:23:58.359 "superblock": true, 00:23:58.359 "num_base_bdevs": 2, 00:23:58.359 
"num_base_bdevs_discovered": 2, 00:23:58.359 "num_base_bdevs_operational": 2, 00:23:58.359 "base_bdevs_list": [ 00:23:58.359 { 00:23:58.359 "name": "pt1", 00:23:58.359 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:58.359 "is_configured": true, 00:23:58.359 "data_offset": 256, 00:23:58.359 "data_size": 7936 00:23:58.359 }, 00:23:58.359 { 00:23:58.359 "name": "pt2", 00:23:58.359 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:58.359 "is_configured": true, 00:23:58.359 "data_offset": 256, 00:23:58.359 "data_size": 7936 00:23:58.359 } 00:23:58.359 ] 00:23:58.359 }' 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:58.359 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:58.923 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:23:58.923 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:58.923 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:58.923 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:58.923 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:23:58.923 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:58.923 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:58.923 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:58.923 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.923 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:58.923 [2024-12-06 13:18:45.835116] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:23:58.923 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.923 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:58.923 "name": "raid_bdev1", 00:23:58.923 "aliases": [ 00:23:58.923 "d3517305-32a2-4ae5-a4c2-1a202defa9b6" 00:23:58.923 ], 00:23:58.923 "product_name": "Raid Volume", 00:23:58.923 "block_size": 4096, 00:23:58.923 "num_blocks": 7936, 00:23:58.923 "uuid": "d3517305-32a2-4ae5-a4c2-1a202defa9b6", 00:23:58.923 "assigned_rate_limits": { 00:23:58.923 "rw_ios_per_sec": 0, 00:23:58.923 "rw_mbytes_per_sec": 0, 00:23:58.923 "r_mbytes_per_sec": 0, 00:23:58.923 "w_mbytes_per_sec": 0 00:23:58.923 }, 00:23:58.923 "claimed": false, 00:23:58.923 "zoned": false, 00:23:58.923 "supported_io_types": { 00:23:58.923 "read": true, 00:23:58.923 "write": true, 00:23:58.923 "unmap": false, 00:23:58.923 "flush": false, 00:23:58.923 "reset": true, 00:23:58.923 "nvme_admin": false, 00:23:58.923 "nvme_io": false, 00:23:58.923 "nvme_io_md": false, 00:23:58.923 "write_zeroes": true, 00:23:58.923 "zcopy": false, 00:23:58.923 "get_zone_info": false, 00:23:58.923 "zone_management": false, 00:23:58.923 "zone_append": false, 00:23:58.923 "compare": false, 00:23:58.923 "compare_and_write": false, 00:23:58.923 "abort": false, 00:23:58.923 "seek_hole": false, 00:23:58.923 "seek_data": false, 00:23:58.923 "copy": false, 00:23:58.923 "nvme_iov_md": false 00:23:58.923 }, 00:23:58.923 "memory_domains": [ 00:23:58.923 { 00:23:58.923 "dma_device_id": "system", 00:23:58.923 "dma_device_type": 1 00:23:58.923 }, 00:23:58.923 { 00:23:58.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:58.923 "dma_device_type": 2 00:23:58.923 }, 00:23:58.923 { 00:23:58.923 "dma_device_id": "system", 00:23:58.923 "dma_device_type": 1 00:23:58.923 }, 00:23:58.923 { 00:23:58.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:58.923 "dma_device_type": 2 00:23:58.923 } 00:23:58.923 ], 
00:23:58.923 "driver_specific": { 00:23:58.923 "raid": { 00:23:58.923 "uuid": "d3517305-32a2-4ae5-a4c2-1a202defa9b6", 00:23:58.923 "strip_size_kb": 0, 00:23:58.923 "state": "online", 00:23:58.923 "raid_level": "raid1", 00:23:58.923 "superblock": true, 00:23:58.923 "num_base_bdevs": 2, 00:23:58.923 "num_base_bdevs_discovered": 2, 00:23:58.923 "num_base_bdevs_operational": 2, 00:23:58.923 "base_bdevs_list": [ 00:23:58.923 { 00:23:58.923 "name": "pt1", 00:23:58.923 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:58.923 "is_configured": true, 00:23:58.923 "data_offset": 256, 00:23:58.923 "data_size": 7936 00:23:58.923 }, 00:23:58.923 { 00:23:58.924 "name": "pt2", 00:23:58.924 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:58.924 "is_configured": true, 00:23:58.924 "data_offset": 256, 00:23:58.924 "data_size": 7936 00:23:58.924 } 00:23:58.924 ] 00:23:58.924 } 00:23:58.924 } 00:23:58.924 }' 00:23:58.924 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:59.182 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:59.182 pt2' 00:23:59.182 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:59.182 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:23:59.182 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:59.182 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:59.182 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.182 13:18:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:59.182 13:18:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:59.182 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.182 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:23:59.182 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:23:59.182 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:59.182 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:59.182 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.182 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:59.182 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:59.182 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.182 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:23:59.182 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:23:59.182 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:59.182 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.182 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:59.182 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:23:59.182 [2024-12-06 13:18:46.103125] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:59.182 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:23:59.182 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d3517305-32a2-4ae5-a4c2-1a202defa9b6 00:23:59.182 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z d3517305-32a2-4ae5-a4c2-1a202defa9b6 ']' 00:23:59.182 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:59.182 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.182 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:59.182 [2024-12-06 13:18:46.158726] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:59.182 [2024-12-06 13:18:46.158977] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:59.182 [2024-12-06 13:18:46.159270] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:59.182 [2024-12-06 13:18:46.159482] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:59.182 [2024-12-06 13:18:46.159647] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:59.182 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.182 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:59.182 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.182 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:23:59.182 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:59.182 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:59.441 [2024-12-06 13:18:46.306824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:59.441 [2024-12-06 13:18:46.309788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:59.441 [2024-12-06 13:18:46.309899] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:23:59.441 [2024-12-06 13:18:46.310026] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:59.441 [2024-12-06 13:18:46.310054] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:59.441 [2024-12-06 13:18:46.310071] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:23:59.441 request: 00:23:59.441 { 00:23:59.441 "name": "raid_bdev1", 00:23:59.441 "raid_level": "raid1", 00:23:59.441 "base_bdevs": [ 00:23:59.441 "malloc1", 00:23:59.441 "malloc2" 00:23:59.441 ], 00:23:59.441 "superblock": false, 00:23:59.441 "method": "bdev_raid_create", 00:23:59.441 "req_id": 1 00:23:59.441 } 00:23:59.441 Got JSON-RPC error response 00:23:59.441 response: 00:23:59.441 { 00:23:59.441 "code": -17, 00:23:59.441 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:59.441 } 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.441 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:59.441 [2024-12-06 13:18:46.375018] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:59.441 [2024-12-06 13:18:46.375086] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:59.441 [2024-12-06 13:18:46.375119] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:59.441 [2024-12-06 13:18:46.375144] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:59.441 [2024-12-06 13:18:46.378433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:59.442 [2024-12-06 13:18:46.378525] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:59.442 [2024-12-06 13:18:46.378624] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:59.442 [2024-12-06 13:18:46.378705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:59.442 pt1 00:23:59.442 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.442 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:23:59.442 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:59.442 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:59.442 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:59.442 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:59.442 13:18:46 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:59.442 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:59.442 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:59.442 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:59.442 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:59.442 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:59.442 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:59.442 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.442 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:59.442 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.442 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:59.442 "name": "raid_bdev1", 00:23:59.442 "uuid": "d3517305-32a2-4ae5-a4c2-1a202defa9b6", 00:23:59.442 "strip_size_kb": 0, 00:23:59.442 "state": "configuring", 00:23:59.442 "raid_level": "raid1", 00:23:59.442 "superblock": true, 00:23:59.442 "num_base_bdevs": 2, 00:23:59.442 "num_base_bdevs_discovered": 1, 00:23:59.442 "num_base_bdevs_operational": 2, 00:23:59.442 "base_bdevs_list": [ 00:23:59.442 { 00:23:59.442 "name": "pt1", 00:23:59.442 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:59.442 "is_configured": true, 00:23:59.442 "data_offset": 256, 00:23:59.442 "data_size": 7936 00:23:59.442 }, 00:23:59.442 { 00:23:59.442 "name": null, 00:23:59.442 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:59.442 "is_configured": false, 00:23:59.442 "data_offset": 256, 00:23:59.442 "data_size": 7936 00:23:59.442 } 
00:23:59.442 ] 00:23:59.442 }' 00:23:59.442 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:59.442 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:00.008 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:24:00.008 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:24:00.008 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:00.008 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:00.008 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.008 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:00.008 [2024-12-06 13:18:46.919279] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:00.008 [2024-12-06 13:18:46.919701] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:00.008 [2024-12-06 13:18:46.919756] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:00.008 [2024-12-06 13:18:46.919778] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:00.008 [2024-12-06 13:18:46.920555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:00.008 [2024-12-06 13:18:46.920607] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:00.008 [2024-12-06 13:18:46.920763] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:00.008 [2024-12-06 13:18:46.920816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:00.008 [2024-12-06 13:18:46.921014] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:24:00.008 [2024-12-06 13:18:46.921042] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:00.008 [2024-12-06 13:18:46.921380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:24:00.008 [2024-12-06 13:18:46.921635] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:00.008 [2024-12-06 13:18:46.921652] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:24:00.008 [2024-12-06 13:18:46.921839] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:00.008 pt2 00:24:00.008 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.008 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:24:00.008 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:00.008 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:00.008 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:00.008 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:00.008 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:00.008 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:00.008 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:00.008 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:00.008 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:00.008 13:18:46 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:00.008 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:00.008 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:00.008 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.008 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.008 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:00.008 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.008 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:00.008 "name": "raid_bdev1", 00:24:00.008 "uuid": "d3517305-32a2-4ae5-a4c2-1a202defa9b6", 00:24:00.008 "strip_size_kb": 0, 00:24:00.008 "state": "online", 00:24:00.008 "raid_level": "raid1", 00:24:00.008 "superblock": true, 00:24:00.008 "num_base_bdevs": 2, 00:24:00.008 "num_base_bdevs_discovered": 2, 00:24:00.008 "num_base_bdevs_operational": 2, 00:24:00.008 "base_bdevs_list": [ 00:24:00.008 { 00:24:00.008 "name": "pt1", 00:24:00.008 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:00.008 "is_configured": true, 00:24:00.008 "data_offset": 256, 00:24:00.008 "data_size": 7936 00:24:00.008 }, 00:24:00.008 { 00:24:00.008 "name": "pt2", 00:24:00.008 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:00.008 "is_configured": true, 00:24:00.008 "data_offset": 256, 00:24:00.008 "data_size": 7936 00:24:00.008 } 00:24:00.008 ] 00:24:00.008 }' 00:24:00.008 13:18:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:00.008 13:18:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:00.574 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:24:00.574 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:24:00.574 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:00.574 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:00.574 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:24:00.574 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:00.574 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:00.574 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.574 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:00.574 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:00.574 [2024-12-06 13:18:47.455757] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:00.574 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.574 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:00.574 "name": "raid_bdev1", 00:24:00.574 "aliases": [ 00:24:00.574 "d3517305-32a2-4ae5-a4c2-1a202defa9b6" 00:24:00.574 ], 00:24:00.574 "product_name": "Raid Volume", 00:24:00.574 "block_size": 4096, 00:24:00.574 "num_blocks": 7936, 00:24:00.574 "uuid": "d3517305-32a2-4ae5-a4c2-1a202defa9b6", 00:24:00.574 "assigned_rate_limits": { 00:24:00.574 "rw_ios_per_sec": 0, 00:24:00.574 "rw_mbytes_per_sec": 0, 00:24:00.574 "r_mbytes_per_sec": 0, 00:24:00.574 "w_mbytes_per_sec": 0 00:24:00.574 }, 00:24:00.574 "claimed": false, 00:24:00.574 "zoned": false, 00:24:00.574 "supported_io_types": { 00:24:00.574 "read": true, 00:24:00.574 "write": true, 00:24:00.574 "unmap": false, 
00:24:00.574 "flush": false, 00:24:00.574 "reset": true, 00:24:00.574 "nvme_admin": false, 00:24:00.574 "nvme_io": false, 00:24:00.574 "nvme_io_md": false, 00:24:00.574 "write_zeroes": true, 00:24:00.574 "zcopy": false, 00:24:00.574 "get_zone_info": false, 00:24:00.574 "zone_management": false, 00:24:00.574 "zone_append": false, 00:24:00.574 "compare": false, 00:24:00.574 "compare_and_write": false, 00:24:00.574 "abort": false, 00:24:00.574 "seek_hole": false, 00:24:00.574 "seek_data": false, 00:24:00.574 "copy": false, 00:24:00.574 "nvme_iov_md": false 00:24:00.574 }, 00:24:00.574 "memory_domains": [ 00:24:00.574 { 00:24:00.574 "dma_device_id": "system", 00:24:00.574 "dma_device_type": 1 00:24:00.574 }, 00:24:00.574 { 00:24:00.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:00.574 "dma_device_type": 2 00:24:00.574 }, 00:24:00.574 { 00:24:00.574 "dma_device_id": "system", 00:24:00.574 "dma_device_type": 1 00:24:00.574 }, 00:24:00.574 { 00:24:00.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:00.574 "dma_device_type": 2 00:24:00.574 } 00:24:00.574 ], 00:24:00.574 "driver_specific": { 00:24:00.574 "raid": { 00:24:00.574 "uuid": "d3517305-32a2-4ae5-a4c2-1a202defa9b6", 00:24:00.574 "strip_size_kb": 0, 00:24:00.574 "state": "online", 00:24:00.574 "raid_level": "raid1", 00:24:00.574 "superblock": true, 00:24:00.574 "num_base_bdevs": 2, 00:24:00.574 "num_base_bdevs_discovered": 2, 00:24:00.574 "num_base_bdevs_operational": 2, 00:24:00.574 "base_bdevs_list": [ 00:24:00.574 { 00:24:00.574 "name": "pt1", 00:24:00.574 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:00.574 "is_configured": true, 00:24:00.574 "data_offset": 256, 00:24:00.574 "data_size": 7936 00:24:00.574 }, 00:24:00.574 { 00:24:00.574 "name": "pt2", 00:24:00.574 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:00.574 "is_configured": true, 00:24:00.574 "data_offset": 256, 00:24:00.574 "data_size": 7936 00:24:00.574 } 00:24:00.574 ] 00:24:00.574 } 00:24:00.574 } 00:24:00.574 }' 00:24:00.574 
13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:00.574 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:24:00.574 pt2' 00:24:00.574 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.833 
13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:00.833 [2024-12-06 13:18:47.731729] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' d3517305-32a2-4ae5-a4c2-1a202defa9b6 '!=' d3517305-32a2-4ae5-a4c2-1a202defa9b6 ']' 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:00.833 [2024-12-06 13:18:47.783527] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:24:00.833 
13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:00.833 "name": "raid_bdev1", 00:24:00.833 "uuid": "d3517305-32a2-4ae5-a4c2-1a202defa9b6", 
00:24:00.833 "strip_size_kb": 0, 00:24:00.833 "state": "online", 00:24:00.833 "raid_level": "raid1", 00:24:00.833 "superblock": true, 00:24:00.833 "num_base_bdevs": 2, 00:24:00.833 "num_base_bdevs_discovered": 1, 00:24:00.833 "num_base_bdevs_operational": 1, 00:24:00.833 "base_bdevs_list": [ 00:24:00.833 { 00:24:00.833 "name": null, 00:24:00.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:00.833 "is_configured": false, 00:24:00.833 "data_offset": 0, 00:24:00.833 "data_size": 7936 00:24:00.833 }, 00:24:00.833 { 00:24:00.833 "name": "pt2", 00:24:00.833 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:00.833 "is_configured": true, 00:24:00.833 "data_offset": 256, 00:24:00.833 "data_size": 7936 00:24:00.833 } 00:24:00.833 ] 00:24:00.833 }' 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:00.833 13:18:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:01.402 [2024-12-06 13:18:48.323650] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:01.402 [2024-12-06 13:18:48.323696] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:01.402 [2024-12-06 13:18:48.323813] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:01.402 [2024-12-06 13:18:48.323899] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:01.402 [2024-12-06 13:18:48.323920] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:24:01.402 13:18:48 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:24:01.402 13:18:48 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:01.402 [2024-12-06 13:18:48.395662] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:01.402 [2024-12-06 13:18:48.395918] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:01.402 [2024-12-06 13:18:48.395957] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:24:01.402 [2024-12-06 13:18:48.395977] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:01.402 [2024-12-06 13:18:48.399387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:01.402 [2024-12-06 13:18:48.399594] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:01.402 [2024-12-06 13:18:48.399720] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:01.402 [2024-12-06 13:18:48.399795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:01.402 [2024-12-06 13:18:48.399977] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:24:01.402 [2024-12-06 13:18:48.400001] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:01.402 pt2 00:24:01.402 [2024-12-06 13:18:48.400347] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:24:01.402 [2024-12-06 13:18:48.400583] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:24:01.402 [2024-12-06 13:18:48.400608] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000008200 00:24:01.402 [2024-12-06 13:18:48.400792] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:01.402 13:18:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:01.683 13:18:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.683 13:18:48 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:01.683 "name": "raid_bdev1", 00:24:01.683 "uuid": "d3517305-32a2-4ae5-a4c2-1a202defa9b6", 00:24:01.683 "strip_size_kb": 0, 00:24:01.683 "state": "online", 00:24:01.683 "raid_level": "raid1", 00:24:01.683 "superblock": true, 00:24:01.683 "num_base_bdevs": 2, 00:24:01.683 "num_base_bdevs_discovered": 1, 00:24:01.683 "num_base_bdevs_operational": 1, 00:24:01.683 "base_bdevs_list": [ 00:24:01.683 { 00:24:01.683 "name": null, 00:24:01.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:01.683 "is_configured": false, 00:24:01.683 "data_offset": 256, 00:24:01.683 "data_size": 7936 00:24:01.683 }, 00:24:01.683 { 00:24:01.683 "name": "pt2", 00:24:01.683 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:01.683 "is_configured": true, 00:24:01.683 "data_offset": 256, 00:24:01.683 "data_size": 7936 00:24:01.683 } 00:24:01.683 ] 00:24:01.683 }' 00:24:01.683 13:18:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:01.683 13:18:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:01.942 13:18:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:01.942 13:18:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.942 13:18:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:01.942 [2024-12-06 13:18:48.944068] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:01.942 [2024-12-06 13:18:48.944131] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:01.942 [2024-12-06 13:18:48.944272] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:01.942 [2024-12-06 13:18:48.944409] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:01.942 [2024-12-06 13:18:48.944426] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:24:01.942 13:18:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.942 13:18:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:01.942 13:18:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:24:01.942 13:18:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.942 13:18:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:02.201 13:18:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.201 13:18:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:24:02.201 13:18:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:24:02.201 13:18:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:24:02.201 13:18:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:02.201 13:18:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.201 13:18:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:02.201 [2024-12-06 13:18:49.008050] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:02.201 [2024-12-06 13:18:49.008331] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:02.201 [2024-12-06 13:18:49.008391] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:24:02.201 [2024-12-06 13:18:49.008412] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:02.201 [2024-12-06 13:18:49.012043] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:02.201 [2024-12-06 13:18:49.012092] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:02.201 [2024-12-06 13:18:49.012248] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:02.201 [2024-12-06 13:18:49.012341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:02.201 [2024-12-06 13:18:49.012627] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:24:02.201 [2024-12-06 13:18:49.012648] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:02.201 [2024-12-06 13:18:49.012673] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:24:02.201 [2024-12-06 13:18:49.012750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:02.201 pt1 00:24:02.201 [2024-12-06 13:18:49.012881] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:24:02.201 [2024-12-06 13:18:49.012898] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:02.201 [2024-12-06 13:18:49.013248] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:02.201 13:18:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.202 [2024-12-06 13:18:49.013532] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:24:02.202 [2024-12-06 13:18:49.013559] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:24:02.202 13:18:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:24:02.202 [2024-12-06 13:18:49.013753] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:24:02.202 13:18:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:02.202 13:18:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:02.202 13:18:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:02.202 13:18:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:02.202 13:18:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:02.202 13:18:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:02.202 13:18:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:02.202 13:18:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:02.202 13:18:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:02.202 13:18:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:02.202 13:18:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:02.202 13:18:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:02.202 13:18:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.202 13:18:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:02.202 13:18:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.202 13:18:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:02.202 "name": "raid_bdev1", 00:24:02.202 "uuid": "d3517305-32a2-4ae5-a4c2-1a202defa9b6", 00:24:02.202 "strip_size_kb": 0, 00:24:02.202 "state": "online", 00:24:02.202 "raid_level": "raid1", 
00:24:02.202 "superblock": true, 00:24:02.202 "num_base_bdevs": 2, 00:24:02.202 "num_base_bdevs_discovered": 1, 00:24:02.202 "num_base_bdevs_operational": 1, 00:24:02.202 "base_bdevs_list": [ 00:24:02.202 { 00:24:02.202 "name": null, 00:24:02.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:02.202 "is_configured": false, 00:24:02.202 "data_offset": 256, 00:24:02.202 "data_size": 7936 00:24:02.202 }, 00:24:02.202 { 00:24:02.202 "name": "pt2", 00:24:02.202 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:02.202 "is_configured": true, 00:24:02.202 "data_offset": 256, 00:24:02.202 "data_size": 7936 00:24:02.202 } 00:24:02.202 ] 00:24:02.202 }' 00:24:02.202 13:18:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:02.202 13:18:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:02.770 13:18:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:24:02.770 13:18:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:24:02.770 13:18:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.770 13:18:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:02.770 13:18:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.770 13:18:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:24:02.770 13:18:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:02.770 13:18:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:24:02.770 13:18:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.770 13:18:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:02.770 
[2024-12-06 13:18:49.602308] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:02.770 13:18:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.770 13:18:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' d3517305-32a2-4ae5-a4c2-1a202defa9b6 '!=' d3517305-32a2-4ae5-a4c2-1a202defa9b6 ']' 00:24:02.770 13:18:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 87001 00:24:02.770 13:18:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 87001 ']' 00:24:02.770 13:18:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 87001 00:24:02.770 13:18:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:24:02.770 13:18:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:02.770 13:18:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87001 00:24:02.770 killing process with pid 87001 00:24:02.770 13:18:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:02.770 13:18:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:02.770 13:18:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87001' 00:24:02.770 13:18:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 87001 00:24:02.770 [2024-12-06 13:18:49.681662] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:02.770 13:18:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 87001 00:24:02.770 [2024-12-06 13:18:49.681952] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:02.770 [2024-12-06 13:18:49.682098] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:24:02.770 [2024-12-06 13:18:49.682137] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:24:03.029 [2024-12-06 13:18:49.884197] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:04.406 13:18:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:24:04.406 00:24:04.406 real 0m7.056s 00:24:04.406 user 0m11.078s 00:24:04.406 sys 0m1.068s 00:24:04.406 13:18:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:04.406 13:18:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:24:04.406 ************************************ 00:24:04.406 END TEST raid_superblock_test_4k 00:24:04.406 ************************************ 00:24:04.406 13:18:51 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:24:04.406 13:18:51 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:24:04.406 13:18:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:24:04.406 13:18:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:04.406 13:18:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:04.406 ************************************ 00:24:04.406 START TEST raid_rebuild_test_sb_4k 00:24:04.406 ************************************ 00:24:04.406 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:24:04.406 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:24:04.406 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:24:04.406 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:24:04.406 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:24:04.406 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:24:04.406 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:24:04.406 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:04.406 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:24:04.406 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:04.406 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:04.406 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:24:04.406 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:04.406 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:04.406 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:04.406 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:24:04.406 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:24:04.406 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:24:04.406 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:24:04.406 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:24:04.406 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:24:04.406 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:24:04.406 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:24:04.406 13:18:51 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:24:04.406 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:24:04.406 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=87335 00:24:04.406 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 87335 00:24:04.406 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 87335 ']' 00:24:04.406 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:04.406 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:04.406 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:04.406 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.406 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:04.406 13:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:04.406 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:04.406 Zero copy mechanism will not be used. 00:24:04.406 [2024-12-06 13:18:51.208192] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:24:04.406 [2024-12-06 13:18:51.208379] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87335 ] 00:24:04.406 [2024-12-06 13:18:51.384867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.665 [2024-12-06 13:18:51.534783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.923 [2024-12-06 13:18:51.753167] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:04.923 [2024-12-06 13:18:51.753262] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:05.182 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:05.182 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:24:05.182 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:05.182 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:24:05.182 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.182 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:05.442 BaseBdev1_malloc 00:24:05.442 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.442 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:05.442 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.442 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:05.442 [2024-12-06 13:18:52.238990] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:05.442 [2024-12-06 13:18:52.239100] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:05.442 [2024-12-06 13:18:52.239150] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:05.442 [2024-12-06 13:18:52.239181] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:05.442 [2024-12-06 13:18:52.242258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:05.442 [2024-12-06 13:18:52.242322] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:05.442 BaseBdev1 00:24:05.442 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.442 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:05.442 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:24:05.442 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.442 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:05.442 BaseBdev2_malloc 00:24:05.442 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.442 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:05.442 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.443 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:05.443 [2024-12-06 13:18:52.296351] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:05.443 [2024-12-06 13:18:52.296445] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:24:05.443 [2024-12-06 13:18:52.296475] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:05.443 [2024-12-06 13:18:52.296517] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:05.443 [2024-12-06 13:18:52.299495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:05.443 [2024-12-06 13:18:52.299564] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:05.443 BaseBdev2 00:24:05.443 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.443 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:24:05.443 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.443 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:05.443 spare_malloc 00:24:05.443 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.443 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:05.443 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.443 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:05.443 spare_delay 00:24:05.443 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.443 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:05.443 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.443 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:05.443 
[2024-12-06 13:18:52.371466] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:05.443 [2024-12-06 13:18:52.371597] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:05.443 [2024-12-06 13:18:52.371628] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:05.443 [2024-12-06 13:18:52.371646] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:05.443 [2024-12-06 13:18:52.374578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:05.443 [2024-12-06 13:18:52.374637] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:05.443 spare 00:24:05.443 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.443 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:24:05.443 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.443 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:05.443 [2024-12-06 13:18:52.379569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:05.443 [2024-12-06 13:18:52.382106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:05.443 [2024-12-06 13:18:52.382443] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:05.443 [2024-12-06 13:18:52.382496] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:05.443 [2024-12-06 13:18:52.382818] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:24:05.443 [2024-12-06 13:18:52.383121] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:05.443 [2024-12-06 
13:18:52.383152] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:05.443 [2024-12-06 13:18:52.383351] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:05.443 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.443 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:05.443 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:05.443 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:05.443 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:05.443 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:05.443 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:05.443 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:05.443 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:05.443 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:05.443 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:05.443 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:05.443 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.443 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.443 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:05.443 13:18:52 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.443 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:05.443 "name": "raid_bdev1", 00:24:05.443 "uuid": "5abe2332-cb89-45f2-b77f-8a2b7e142d7b", 00:24:05.443 "strip_size_kb": 0, 00:24:05.443 "state": "online", 00:24:05.443 "raid_level": "raid1", 00:24:05.443 "superblock": true, 00:24:05.443 "num_base_bdevs": 2, 00:24:05.443 "num_base_bdevs_discovered": 2, 00:24:05.443 "num_base_bdevs_operational": 2, 00:24:05.443 "base_bdevs_list": [ 00:24:05.443 { 00:24:05.443 "name": "BaseBdev1", 00:24:05.443 "uuid": "4c2e3e63-4135-5572-a9bb-a164f1fd4983", 00:24:05.443 "is_configured": true, 00:24:05.443 "data_offset": 256, 00:24:05.443 "data_size": 7936 00:24:05.443 }, 00:24:05.443 { 00:24:05.443 "name": "BaseBdev2", 00:24:05.443 "uuid": "772cd851-a436-5f30-9c0a-66bbc00dc4e8", 00:24:05.443 "is_configured": true, 00:24:05.443 "data_offset": 256, 00:24:05.443 "data_size": 7936 00:24:05.443 } 00:24:05.443 ] 00:24:05.443 }' 00:24:05.443 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:05.443 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:06.010 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:06.010 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:24:06.010 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.010 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:06.010 [2024-12-06 13:18:52.844157] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:06.010 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.010 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:24:06.010 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:06.010 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:06.010 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.010 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:06.010 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.010 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:24:06.010 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:24:06.010 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:24:06.010 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:24:06.010 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:24:06.010 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:06.010 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:06.010 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:06.010 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:06.010 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:06.010 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:24:06.010 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:06.010 13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:06.010 
13:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:06.268 [2024-12-06 13:18:53.168025] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:06.268 /dev/nbd0 00:24:06.268 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:06.268 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:06.268 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:06.268 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:24:06.268 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:06.268 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:06.268 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:06.269 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:24:06.269 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:06.269 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:06.269 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:06.269 1+0 records in 00:24:06.269 1+0 records out 00:24:06.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372204 s, 11.0 MB/s 00:24:06.269 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:06.269 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:24:06.269 13:18:53 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:06.269 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:06.269 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:24:06.269 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:06.269 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:06.269 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:24:06.269 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:24:06.269 13:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:24:07.204 7936+0 records in 00:24:07.204 7936+0 records out 00:24:07.204 32505856 bytes (33 MB, 31 MiB) copied, 0.935801 s, 34.7 MB/s 00:24:07.204 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:24:07.204 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:07.204 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:07.204 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:07.204 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:24:07.204 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:07.204 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:07.771 [2024-12-06 13:18:54.484716] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:24:07.771 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:07.771 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:07.771 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:07.771 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:07.771 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:07.771 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:07.771 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:24:07.771 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:24:07.771 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:24:07.771 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.771 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:07.771 [2024-12-06 13:18:54.501443] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:07.771 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.771 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:07.771 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:07.771 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:07.771 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:07.771 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:07.771 13:18:54 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:07.771 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:07.771 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:07.771 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:07.771 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:07.771 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.771 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:07.771 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.771 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:07.771 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.771 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:07.771 "name": "raid_bdev1", 00:24:07.771 "uuid": "5abe2332-cb89-45f2-b77f-8a2b7e142d7b", 00:24:07.771 "strip_size_kb": 0, 00:24:07.771 "state": "online", 00:24:07.771 "raid_level": "raid1", 00:24:07.771 "superblock": true, 00:24:07.772 "num_base_bdevs": 2, 00:24:07.772 "num_base_bdevs_discovered": 1, 00:24:07.772 "num_base_bdevs_operational": 1, 00:24:07.772 "base_bdevs_list": [ 00:24:07.772 { 00:24:07.772 "name": null, 00:24:07.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.772 "is_configured": false, 00:24:07.772 "data_offset": 0, 00:24:07.772 "data_size": 7936 00:24:07.772 }, 00:24:07.772 { 00:24:07.772 "name": "BaseBdev2", 00:24:07.772 "uuid": "772cd851-a436-5f30-9c0a-66bbc00dc4e8", 00:24:07.772 "is_configured": true, 00:24:07.772 "data_offset": 256, 00:24:07.772 
"data_size": 7936 00:24:07.772 } 00:24:07.772 ] 00:24:07.772 }' 00:24:07.772 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:07.772 13:18:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:08.031 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:08.031 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.031 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:08.031 [2024-12-06 13:18:55.013588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:08.031 [2024-12-06 13:18:55.030728] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:24:08.031 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.031 13:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:24:08.031 [2024-12-06 13:18:55.033544] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:09.404 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:09.404 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:09.404 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:09.404 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:09.404 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:09.404 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:09.404 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:09.404 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.404 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:09.404 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.404 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:09.404 "name": "raid_bdev1", 00:24:09.404 "uuid": "5abe2332-cb89-45f2-b77f-8a2b7e142d7b", 00:24:09.404 "strip_size_kb": 0, 00:24:09.404 "state": "online", 00:24:09.404 "raid_level": "raid1", 00:24:09.404 "superblock": true, 00:24:09.404 "num_base_bdevs": 2, 00:24:09.404 "num_base_bdevs_discovered": 2, 00:24:09.404 "num_base_bdevs_operational": 2, 00:24:09.404 "process": { 00:24:09.404 "type": "rebuild", 00:24:09.404 "target": "spare", 00:24:09.404 "progress": { 00:24:09.404 "blocks": 2304, 00:24:09.404 "percent": 29 00:24:09.404 } 00:24:09.404 }, 00:24:09.404 "base_bdevs_list": [ 00:24:09.404 { 00:24:09.404 "name": "spare", 00:24:09.404 "uuid": "f1afcea1-cb01-544b-88e3-70834eb18046", 00:24:09.404 "is_configured": true, 00:24:09.404 "data_offset": 256, 00:24:09.404 "data_size": 7936 00:24:09.404 }, 00:24:09.404 { 00:24:09.404 "name": "BaseBdev2", 00:24:09.404 "uuid": "772cd851-a436-5f30-9c0a-66bbc00dc4e8", 00:24:09.404 "is_configured": true, 00:24:09.404 "data_offset": 256, 00:24:09.404 "data_size": 7936 00:24:09.404 } 00:24:09.404 ] 00:24:09.404 }' 00:24:09.404 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:09.404 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:09.404 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:09.404 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:24:09.404 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:09.404 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.404 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:09.404 [2024-12-06 13:18:56.199637] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:09.404 [2024-12-06 13:18:56.245756] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:09.404 [2024-12-06 13:18:56.245904] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:09.404 [2024-12-06 13:18:56.245930] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:09.404 [2024-12-06 13:18:56.245945] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:09.404 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.404 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:09.404 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:09.404 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:09.404 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:09.404 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:09.404 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:09.404 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:09.404 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:24:09.404 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:09.404 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:09.404 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.404 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:09.404 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.404 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:09.404 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.404 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:09.404 "name": "raid_bdev1", 00:24:09.404 "uuid": "5abe2332-cb89-45f2-b77f-8a2b7e142d7b", 00:24:09.404 "strip_size_kb": 0, 00:24:09.404 "state": "online", 00:24:09.404 "raid_level": "raid1", 00:24:09.404 "superblock": true, 00:24:09.404 "num_base_bdevs": 2, 00:24:09.404 "num_base_bdevs_discovered": 1, 00:24:09.404 "num_base_bdevs_operational": 1, 00:24:09.404 "base_bdevs_list": [ 00:24:09.404 { 00:24:09.404 "name": null, 00:24:09.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:09.404 "is_configured": false, 00:24:09.404 "data_offset": 0, 00:24:09.404 "data_size": 7936 00:24:09.404 }, 00:24:09.404 { 00:24:09.404 "name": "BaseBdev2", 00:24:09.404 "uuid": "772cd851-a436-5f30-9c0a-66bbc00dc4e8", 00:24:09.404 "is_configured": true, 00:24:09.404 "data_offset": 256, 00:24:09.404 "data_size": 7936 00:24:09.404 } 00:24:09.404 ] 00:24:09.404 }' 00:24:09.404 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:09.404 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:09.969 13:18:56 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:09.969 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:09.969 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:09.969 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:09.969 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:09.969 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:09.969 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.969 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.969 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:09.969 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.969 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:09.969 "name": "raid_bdev1", 00:24:09.969 "uuid": "5abe2332-cb89-45f2-b77f-8a2b7e142d7b", 00:24:09.969 "strip_size_kb": 0, 00:24:09.969 "state": "online", 00:24:09.969 "raid_level": "raid1", 00:24:09.969 "superblock": true, 00:24:09.969 "num_base_bdevs": 2, 00:24:09.969 "num_base_bdevs_discovered": 1, 00:24:09.969 "num_base_bdevs_operational": 1, 00:24:09.969 "base_bdevs_list": [ 00:24:09.969 { 00:24:09.969 "name": null, 00:24:09.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:09.969 "is_configured": false, 00:24:09.969 "data_offset": 0, 00:24:09.969 "data_size": 7936 00:24:09.969 }, 00:24:09.969 { 00:24:09.969 "name": "BaseBdev2", 00:24:09.969 "uuid": "772cd851-a436-5f30-9c0a-66bbc00dc4e8", 00:24:09.969 "is_configured": true, 00:24:09.969 "data_offset": 
256, 00:24:09.969 "data_size": 7936 00:24:09.969 } 00:24:09.969 ] 00:24:09.969 }' 00:24:09.969 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:09.969 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:09.969 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:09.969 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:09.969 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:09.969 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.969 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:09.969 [2024-12-06 13:18:56.969781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:10.236 [2024-12-06 13:18:56.987777] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:24:10.236 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.236 13:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:24:10.236 [2024-12-06 13:18:56.990925] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:11.186 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:11.186 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:11.186 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:11.186 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:11.186 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:11.186 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:11.186 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.186 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.186 13:18:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:11.186 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.186 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:11.186 "name": "raid_bdev1", 00:24:11.186 "uuid": "5abe2332-cb89-45f2-b77f-8a2b7e142d7b", 00:24:11.186 "strip_size_kb": 0, 00:24:11.186 "state": "online", 00:24:11.186 "raid_level": "raid1", 00:24:11.186 "superblock": true, 00:24:11.186 "num_base_bdevs": 2, 00:24:11.186 "num_base_bdevs_discovered": 2, 00:24:11.186 "num_base_bdevs_operational": 2, 00:24:11.186 "process": { 00:24:11.186 "type": "rebuild", 00:24:11.186 "target": "spare", 00:24:11.186 "progress": { 00:24:11.186 "blocks": 2560, 00:24:11.186 "percent": 32 00:24:11.186 } 00:24:11.186 }, 00:24:11.186 "base_bdevs_list": [ 00:24:11.186 { 00:24:11.186 "name": "spare", 00:24:11.186 "uuid": "f1afcea1-cb01-544b-88e3-70834eb18046", 00:24:11.186 "is_configured": true, 00:24:11.186 "data_offset": 256, 00:24:11.186 "data_size": 7936 00:24:11.186 }, 00:24:11.186 { 00:24:11.186 "name": "BaseBdev2", 00:24:11.187 "uuid": "772cd851-a436-5f30-9c0a-66bbc00dc4e8", 00:24:11.187 "is_configured": true, 00:24:11.187 "data_offset": 256, 00:24:11.187 "data_size": 7936 00:24:11.187 } 00:24:11.187 ] 00:24:11.187 }' 00:24:11.187 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:11.187 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:24:11.187 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:11.187 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:11.187 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:24:11.187 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:24:11.187 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:24:11.187 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:24:11.187 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:24:11.187 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:24:11.187 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=749 00:24:11.187 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:11.187 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:11.187 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:11.187 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:11.187 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:11.187 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:11.187 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:11.187 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.187 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:24:11.187 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.187 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.446 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:11.446 "name": "raid_bdev1", 00:24:11.446 "uuid": "5abe2332-cb89-45f2-b77f-8a2b7e142d7b", 00:24:11.446 "strip_size_kb": 0, 00:24:11.446 "state": "online", 00:24:11.446 "raid_level": "raid1", 00:24:11.446 "superblock": true, 00:24:11.446 "num_base_bdevs": 2, 00:24:11.446 "num_base_bdevs_discovered": 2, 00:24:11.446 "num_base_bdevs_operational": 2, 00:24:11.446 "process": { 00:24:11.446 "type": "rebuild", 00:24:11.446 "target": "spare", 00:24:11.446 "progress": { 00:24:11.446 "blocks": 2816, 00:24:11.446 "percent": 35 00:24:11.446 } 00:24:11.446 }, 00:24:11.446 "base_bdevs_list": [ 00:24:11.446 { 00:24:11.446 "name": "spare", 00:24:11.446 "uuid": "f1afcea1-cb01-544b-88e3-70834eb18046", 00:24:11.446 "is_configured": true, 00:24:11.446 "data_offset": 256, 00:24:11.446 "data_size": 7936 00:24:11.446 }, 00:24:11.446 { 00:24:11.446 "name": "BaseBdev2", 00:24:11.446 "uuid": "772cd851-a436-5f30-9c0a-66bbc00dc4e8", 00:24:11.446 "is_configured": true, 00:24:11.446 "data_offset": 256, 00:24:11.446 "data_size": 7936 00:24:11.446 } 00:24:11.446 ] 00:24:11.446 }' 00:24:11.446 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:11.446 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:11.446 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:11.446 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:11.446 13:18:58 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:24:12.382 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:12.382 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:12.382 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:12.382 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:12.382 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:12.382 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:12.382 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:12.382 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.382 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:12.382 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:12.382 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.382 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:12.382 "name": "raid_bdev1", 00:24:12.382 "uuid": "5abe2332-cb89-45f2-b77f-8a2b7e142d7b", 00:24:12.382 "strip_size_kb": 0, 00:24:12.382 "state": "online", 00:24:12.382 "raid_level": "raid1", 00:24:12.382 "superblock": true, 00:24:12.382 "num_base_bdevs": 2, 00:24:12.382 "num_base_bdevs_discovered": 2, 00:24:12.382 "num_base_bdevs_operational": 2, 00:24:12.382 "process": { 00:24:12.382 "type": "rebuild", 00:24:12.382 "target": "spare", 00:24:12.382 "progress": { 00:24:12.382 "blocks": 5888, 00:24:12.382 "percent": 74 00:24:12.382 } 00:24:12.382 }, 00:24:12.382 "base_bdevs_list": [ 00:24:12.382 { 
00:24:12.382 "name": "spare", 00:24:12.382 "uuid": "f1afcea1-cb01-544b-88e3-70834eb18046", 00:24:12.382 "is_configured": true, 00:24:12.382 "data_offset": 256, 00:24:12.382 "data_size": 7936 00:24:12.382 }, 00:24:12.382 { 00:24:12.382 "name": "BaseBdev2", 00:24:12.382 "uuid": "772cd851-a436-5f30-9c0a-66bbc00dc4e8", 00:24:12.382 "is_configured": true, 00:24:12.382 "data_offset": 256, 00:24:12.382 "data_size": 7936 00:24:12.382 } 00:24:12.382 ] 00:24:12.382 }' 00:24:12.382 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:12.640 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:12.640 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:12.640 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:12.640 13:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:13.205 [2024-12-06 13:19:00.120469] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:13.205 [2024-12-06 13:19:00.120620] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:13.205 [2024-12-06 13:19:00.120793] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:13.770 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:13.770 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:13.770 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:13.770 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:13.770 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:24:13.770 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:13.770 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:13.770 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.770 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:13.770 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:13.770 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.770 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:13.770 "name": "raid_bdev1", 00:24:13.770 "uuid": "5abe2332-cb89-45f2-b77f-8a2b7e142d7b", 00:24:13.770 "strip_size_kb": 0, 00:24:13.770 "state": "online", 00:24:13.770 "raid_level": "raid1", 00:24:13.770 "superblock": true, 00:24:13.770 "num_base_bdevs": 2, 00:24:13.770 "num_base_bdevs_discovered": 2, 00:24:13.770 "num_base_bdevs_operational": 2, 00:24:13.770 "base_bdevs_list": [ 00:24:13.770 { 00:24:13.770 "name": "spare", 00:24:13.770 "uuid": "f1afcea1-cb01-544b-88e3-70834eb18046", 00:24:13.770 "is_configured": true, 00:24:13.770 "data_offset": 256, 00:24:13.770 "data_size": 7936 00:24:13.770 }, 00:24:13.770 { 00:24:13.770 "name": "BaseBdev2", 00:24:13.770 "uuid": "772cd851-a436-5f30-9c0a-66bbc00dc4e8", 00:24:13.770 "is_configured": true, 00:24:13.770 "data_offset": 256, 00:24:13.771 "data_size": 7936 00:24:13.771 } 00:24:13.771 ] 00:24:13.771 }' 00:24:13.771 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:13.771 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:13.771 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:24:13.771 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:24:13.771 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:24:13.771 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:13.771 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:13.771 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:13.771 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:13.771 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:13.771 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:13.771 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:13.771 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.771 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:13.771 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.771 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:13.771 "name": "raid_bdev1", 00:24:13.771 "uuid": "5abe2332-cb89-45f2-b77f-8a2b7e142d7b", 00:24:13.771 "strip_size_kb": 0, 00:24:13.771 "state": "online", 00:24:13.771 "raid_level": "raid1", 00:24:13.771 "superblock": true, 00:24:13.771 "num_base_bdevs": 2, 00:24:13.771 "num_base_bdevs_discovered": 2, 00:24:13.771 "num_base_bdevs_operational": 2, 00:24:13.771 "base_bdevs_list": [ 00:24:13.771 { 00:24:13.771 "name": "spare", 00:24:13.771 "uuid": "f1afcea1-cb01-544b-88e3-70834eb18046", 00:24:13.771 "is_configured": true, 00:24:13.771 
"data_offset": 256, 00:24:13.771 "data_size": 7936 00:24:13.771 }, 00:24:13.771 { 00:24:13.771 "name": "BaseBdev2", 00:24:13.771 "uuid": "772cd851-a436-5f30-9c0a-66bbc00dc4e8", 00:24:13.771 "is_configured": true, 00:24:13.771 "data_offset": 256, 00:24:13.771 "data_size": 7936 00:24:13.771 } 00:24:13.771 ] 00:24:13.771 }' 00:24:13.771 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:13.771 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:13.771 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:14.029 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:14.029 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:14.029 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:14.029 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:14.029 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:14.029 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:14.029 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:14.029 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:14.029 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:14.029 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:14.029 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:14.029 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:24:14.029 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:14.029 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.029 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:14.029 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.029 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:14.029 "name": "raid_bdev1", 00:24:14.029 "uuid": "5abe2332-cb89-45f2-b77f-8a2b7e142d7b", 00:24:14.029 "strip_size_kb": 0, 00:24:14.029 "state": "online", 00:24:14.029 "raid_level": "raid1", 00:24:14.029 "superblock": true, 00:24:14.029 "num_base_bdevs": 2, 00:24:14.029 "num_base_bdevs_discovered": 2, 00:24:14.029 "num_base_bdevs_operational": 2, 00:24:14.029 "base_bdevs_list": [ 00:24:14.029 { 00:24:14.029 "name": "spare", 00:24:14.029 "uuid": "f1afcea1-cb01-544b-88e3-70834eb18046", 00:24:14.029 "is_configured": true, 00:24:14.029 "data_offset": 256, 00:24:14.029 "data_size": 7936 00:24:14.029 }, 00:24:14.029 { 00:24:14.029 "name": "BaseBdev2", 00:24:14.029 "uuid": "772cd851-a436-5f30-9c0a-66bbc00dc4e8", 00:24:14.029 "is_configured": true, 00:24:14.029 "data_offset": 256, 00:24:14.029 "data_size": 7936 00:24:14.029 } 00:24:14.029 ] 00:24:14.029 }' 00:24:14.029 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:14.029 13:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:14.593 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:14.593 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.593 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:14.593 
[2024-12-06 13:19:01.330607] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:14.594 [2024-12-06 13:19:01.330660] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:14.594 [2024-12-06 13:19:01.330782] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:14.594 [2024-12-06 13:19:01.330890] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:14.594 [2024-12-06 13:19:01.330912] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:14.594 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.594 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:14.594 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:24:14.594 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.594 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:14.594 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.594 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:24:14.594 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:14.594 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:24:14.594 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:14.594 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:14.594 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:24:14.594 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:14.594 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:14.594 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:14.594 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:24:14.594 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:14.594 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:14.594 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:14.852 /dev/nbd0 00:24:14.852 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:14.852 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:14.852 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:14.852 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:24:14.852 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:14.852 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:14.852 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:14.852 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:24:14.852 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:14.852 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:14.852 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:14.852 1+0 records in 00:24:14.852 1+0 records out 00:24:14.852 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030733 s, 13.3 MB/s 00:24:14.852 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:14.852 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:24:14.852 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:14.852 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:14.852 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:24:14.852 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:14.852 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:14.852 13:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:24:15.109 /dev/nbd1 00:24:15.109 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:15.109 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:15.109 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:24:15.109 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:24:15.109 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:15.109 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:15.109 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:24:15.109 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:24:15.109 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:15.109 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:15.109 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:15.109 1+0 records in 00:24:15.109 1+0 records out 00:24:15.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320519 s, 12.8 MB/s 00:24:15.109 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:15.109 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:24:15.109 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:15.109 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:15.109 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:24:15.109 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:15.109 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:15.109 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:15.418 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:24:15.418 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:15.418 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:15.418 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:15.418 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:24:15.418 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:15.418 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:15.677 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:15.677 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:15.677 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:15.677 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:15.677 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:15.677 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:15.677 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:24:15.677 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:24:15.677 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:15.677 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:24:15.934 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:15.934 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:15.934 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:15.934 13:19:02 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:15.934 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:15.934 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:15.934 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:24:15.934 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:24:15.934 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:24:15.934 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:24:15.934 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.934 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:15.934 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.934 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:15.934 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.934 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:15.934 [2024-12-06 13:19:02.915076] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:15.934 [2024-12-06 13:19:02.915143] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:15.934 [2024-12-06 13:19:02.915180] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:24:15.934 [2024-12-06 13:19:02.915197] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:15.934 [2024-12-06 13:19:02.918600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:15.934 
[2024-12-06 13:19:02.918644] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:15.934 [2024-12-06 13:19:02.918782] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:15.934 [2024-12-06 13:19:02.918874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:15.934 [2024-12-06 13:19:02.919100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:15.934 spare 00:24:15.934 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.934 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:24:15.934 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.934 13:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:16.192 [2024-12-06 13:19:03.019306] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:16.192 [2024-12-06 13:19:03.019414] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:16.192 [2024-12-06 13:19:03.020002] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:24:16.192 [2024-12-06 13:19:03.020359] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:16.192 [2024-12-06 13:19:03.020386] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:24:16.192 [2024-12-06 13:19:03.020676] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:16.192 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.192 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:16.192 13:19:03 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:16.192 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:16.192 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:16.192 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:16.192 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:16.192 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:16.192 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:16.192 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:16.192 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:16.192 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:16.192 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.192 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.192 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:16.192 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.192 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:16.192 "name": "raid_bdev1", 00:24:16.192 "uuid": "5abe2332-cb89-45f2-b77f-8a2b7e142d7b", 00:24:16.192 "strip_size_kb": 0, 00:24:16.192 "state": "online", 00:24:16.192 "raid_level": "raid1", 00:24:16.192 "superblock": true, 00:24:16.192 "num_base_bdevs": 2, 00:24:16.192 "num_base_bdevs_discovered": 2, 00:24:16.192 "num_base_bdevs_operational": 2, 
00:24:16.192 "base_bdevs_list": [ 00:24:16.192 { 00:24:16.192 "name": "spare", 00:24:16.192 "uuid": "f1afcea1-cb01-544b-88e3-70834eb18046", 00:24:16.192 "is_configured": true, 00:24:16.192 "data_offset": 256, 00:24:16.192 "data_size": 7936 00:24:16.192 }, 00:24:16.192 { 00:24:16.192 "name": "BaseBdev2", 00:24:16.192 "uuid": "772cd851-a436-5f30-9c0a-66bbc00dc4e8", 00:24:16.192 "is_configured": true, 00:24:16.192 "data_offset": 256, 00:24:16.192 "data_size": 7936 00:24:16.192 } 00:24:16.192 ] 00:24:16.192 }' 00:24:16.192 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:16.192 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:16.756 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:16.756 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:16.756 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:16.756 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:16.756 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:16.756 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:16.756 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.756 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.756 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:16.756 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.756 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:16.756 "name": "raid_bdev1", 00:24:16.756 
"uuid": "5abe2332-cb89-45f2-b77f-8a2b7e142d7b", 00:24:16.756 "strip_size_kb": 0, 00:24:16.756 "state": "online", 00:24:16.756 "raid_level": "raid1", 00:24:16.756 "superblock": true, 00:24:16.756 "num_base_bdevs": 2, 00:24:16.756 "num_base_bdevs_discovered": 2, 00:24:16.756 "num_base_bdevs_operational": 2, 00:24:16.756 "base_bdevs_list": [ 00:24:16.756 { 00:24:16.756 "name": "spare", 00:24:16.773 "uuid": "f1afcea1-cb01-544b-88e3-70834eb18046", 00:24:16.773 "is_configured": true, 00:24:16.773 "data_offset": 256, 00:24:16.773 "data_size": 7936 00:24:16.773 }, 00:24:16.773 { 00:24:16.773 "name": "BaseBdev2", 00:24:16.773 "uuid": "772cd851-a436-5f30-9c0a-66bbc00dc4e8", 00:24:16.773 "is_configured": true, 00:24:16.773 "data_offset": 256, 00:24:16.773 "data_size": 7936 00:24:16.773 } 00:24:16.773 ] 00:24:16.773 }' 00:24:16.773 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:16.773 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:16.773 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:16.773 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:16.773 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:16.773 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:16.773 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.773 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:16.773 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.773 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:24:16.773 13:19:03 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:16.773 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.773 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:16.773 [2024-12-06 13:19:03.711532] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:16.773 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.773 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:16.773 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:16.773 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:16.773 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:16.773 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:16.773 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:16.773 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:16.773 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:16.773 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:16.773 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:16.773 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:16.773 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.773 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.773 
13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:16.773 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.773 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:16.773 "name": "raid_bdev1", 00:24:16.773 "uuid": "5abe2332-cb89-45f2-b77f-8a2b7e142d7b", 00:24:16.773 "strip_size_kb": 0, 00:24:16.773 "state": "online", 00:24:16.773 "raid_level": "raid1", 00:24:16.773 "superblock": true, 00:24:16.773 "num_base_bdevs": 2, 00:24:16.773 "num_base_bdevs_discovered": 1, 00:24:16.773 "num_base_bdevs_operational": 1, 00:24:16.773 "base_bdevs_list": [ 00:24:16.773 { 00:24:16.773 "name": null, 00:24:16.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:16.773 "is_configured": false, 00:24:16.773 "data_offset": 0, 00:24:16.773 "data_size": 7936 00:24:16.773 }, 00:24:16.773 { 00:24:16.773 "name": "BaseBdev2", 00:24:16.773 "uuid": "772cd851-a436-5f30-9c0a-66bbc00dc4e8", 00:24:16.773 "is_configured": true, 00:24:16.773 "data_offset": 256, 00:24:16.773 "data_size": 7936 00:24:16.773 } 00:24:16.773 ] 00:24:16.773 }' 00:24:16.773 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:16.773 13:19:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:17.339 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:17.339 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.339 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:17.339 [2024-12-06 13:19:04.239790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:17.339 [2024-12-06 13:19:04.240164] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:24:17.339 [2024-12-06 13:19:04.240193] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:24:17.339 [2024-12-06 13:19:04.240241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:17.339 [2024-12-06 13:19:04.257833] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:24:17.339 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.339 13:19:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:24:17.339 [2024-12-06 13:19:04.260713] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:18.273 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:18.273 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:18.273 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:18.273 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:18.273 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:18.273 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:18.273 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.273 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:18.273 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:18.273 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.531 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:18.531 
"name": "raid_bdev1", 00:24:18.531 "uuid": "5abe2332-cb89-45f2-b77f-8a2b7e142d7b", 00:24:18.531 "strip_size_kb": 0, 00:24:18.531 "state": "online", 00:24:18.531 "raid_level": "raid1", 00:24:18.531 "superblock": true, 00:24:18.531 "num_base_bdevs": 2, 00:24:18.531 "num_base_bdevs_discovered": 2, 00:24:18.531 "num_base_bdevs_operational": 2, 00:24:18.531 "process": { 00:24:18.531 "type": "rebuild", 00:24:18.531 "target": "spare", 00:24:18.531 "progress": { 00:24:18.531 "blocks": 2304, 00:24:18.531 "percent": 29 00:24:18.531 } 00:24:18.531 }, 00:24:18.531 "base_bdevs_list": [ 00:24:18.531 { 00:24:18.531 "name": "spare", 00:24:18.531 "uuid": "f1afcea1-cb01-544b-88e3-70834eb18046", 00:24:18.531 "is_configured": true, 00:24:18.531 "data_offset": 256, 00:24:18.531 "data_size": 7936 00:24:18.531 }, 00:24:18.531 { 00:24:18.531 "name": "BaseBdev2", 00:24:18.531 "uuid": "772cd851-a436-5f30-9c0a-66bbc00dc4e8", 00:24:18.531 "is_configured": true, 00:24:18.531 "data_offset": 256, 00:24:18.531 "data_size": 7936 00:24:18.531 } 00:24:18.531 ] 00:24:18.531 }' 00:24:18.531 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:18.531 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:18.531 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:18.531 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:18.531 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:24:18.531 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.531 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:18.531 [2024-12-06 13:19:05.434891] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:18.531 [2024-12-06 
13:19:05.472886] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:18.531 [2024-12-06 13:19:05.473006] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:18.531 [2024-12-06 13:19:05.473032] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:18.531 [2024-12-06 13:19:05.473048] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:18.531 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.531 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:18.531 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:18.531 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:18.531 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:18.531 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:18.531 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:18.531 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:18.531 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:18.531 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:18.531 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:18.531 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:18.531 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:24:18.531 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.531 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:18.531 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.789 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:18.789 "name": "raid_bdev1", 00:24:18.789 "uuid": "5abe2332-cb89-45f2-b77f-8a2b7e142d7b", 00:24:18.789 "strip_size_kb": 0, 00:24:18.789 "state": "online", 00:24:18.789 "raid_level": "raid1", 00:24:18.789 "superblock": true, 00:24:18.789 "num_base_bdevs": 2, 00:24:18.789 "num_base_bdevs_discovered": 1, 00:24:18.789 "num_base_bdevs_operational": 1, 00:24:18.789 "base_bdevs_list": [ 00:24:18.789 { 00:24:18.789 "name": null, 00:24:18.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:18.789 "is_configured": false, 00:24:18.789 "data_offset": 0, 00:24:18.789 "data_size": 7936 00:24:18.789 }, 00:24:18.789 { 00:24:18.789 "name": "BaseBdev2", 00:24:18.789 "uuid": "772cd851-a436-5f30-9c0a-66bbc00dc4e8", 00:24:18.789 "is_configured": true, 00:24:18.789 "data_offset": 256, 00:24:18.789 "data_size": 7936 00:24:18.789 } 00:24:18.789 ] 00:24:18.789 }' 00:24:18.789 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:18.789 13:19:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:19.047 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:19.047 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.047 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:19.047 [2024-12-06 13:19:06.053850] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:19.047 [2024-12-06 13:19:06.053987] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:19.048 [2024-12-06 13:19:06.054023] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:24:19.048 [2024-12-06 13:19:06.054043] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:19.048 [2024-12-06 13:19:06.054817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:19.048 [2024-12-06 13:19:06.054896] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:19.048 [2024-12-06 13:19:06.055062] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:19.048 [2024-12-06 13:19:06.055095] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:19.048 [2024-12-06 13:19:06.055111] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:24:19.048 [2024-12-06 13:19:06.055151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:19.307 [2024-12-06 13:19:06.072156] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:24:19.307 spare 00:24:19.307 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.307 13:19:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:24:19.307 [2024-12-06 13:19:06.075175] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:20.241 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:20.241 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:20.241 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:20.241 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:20.241 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:20.241 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:20.241 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:20.241 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.241 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:20.241 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.241 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:20.241 "name": "raid_bdev1", 00:24:20.241 "uuid": "5abe2332-cb89-45f2-b77f-8a2b7e142d7b", 00:24:20.241 "strip_size_kb": 0, 00:24:20.241 
"state": "online", 00:24:20.241 "raid_level": "raid1", 00:24:20.241 "superblock": true, 00:24:20.241 "num_base_bdevs": 2, 00:24:20.241 "num_base_bdevs_discovered": 2, 00:24:20.241 "num_base_bdevs_operational": 2, 00:24:20.241 "process": { 00:24:20.241 "type": "rebuild", 00:24:20.241 "target": "spare", 00:24:20.241 "progress": { 00:24:20.241 "blocks": 2560, 00:24:20.241 "percent": 32 00:24:20.241 } 00:24:20.241 }, 00:24:20.241 "base_bdevs_list": [ 00:24:20.241 { 00:24:20.241 "name": "spare", 00:24:20.241 "uuid": "f1afcea1-cb01-544b-88e3-70834eb18046", 00:24:20.241 "is_configured": true, 00:24:20.241 "data_offset": 256, 00:24:20.241 "data_size": 7936 00:24:20.241 }, 00:24:20.241 { 00:24:20.241 "name": "BaseBdev2", 00:24:20.241 "uuid": "772cd851-a436-5f30-9c0a-66bbc00dc4e8", 00:24:20.241 "is_configured": true, 00:24:20.241 "data_offset": 256, 00:24:20.241 "data_size": 7936 00:24:20.241 } 00:24:20.241 ] 00:24:20.241 }' 00:24:20.241 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:20.241 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:20.241 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:20.241 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:20.241 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:24:20.241 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.241 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:20.241 [2024-12-06 13:19:07.240320] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:20.499 [2024-12-06 13:19:07.285745] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:24:20.499 [2024-12-06 13:19:07.285849] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:20.499 [2024-12-06 13:19:07.285876] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:20.499 [2024-12-06 13:19:07.285887] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:20.499 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.499 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:20.499 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:20.499 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:20.499 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:20.499 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:20.499 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:20.499 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:20.499 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:20.499 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:20.499 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:20.499 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:20.499 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.499 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:20.499 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:20.499 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.499 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:20.499 "name": "raid_bdev1", 00:24:20.499 "uuid": "5abe2332-cb89-45f2-b77f-8a2b7e142d7b", 00:24:20.499 "strip_size_kb": 0, 00:24:20.499 "state": "online", 00:24:20.499 "raid_level": "raid1", 00:24:20.499 "superblock": true, 00:24:20.499 "num_base_bdevs": 2, 00:24:20.499 "num_base_bdevs_discovered": 1, 00:24:20.499 "num_base_bdevs_operational": 1, 00:24:20.499 "base_bdevs_list": [ 00:24:20.499 { 00:24:20.499 "name": null, 00:24:20.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.499 "is_configured": false, 00:24:20.499 "data_offset": 0, 00:24:20.499 "data_size": 7936 00:24:20.499 }, 00:24:20.499 { 00:24:20.499 "name": "BaseBdev2", 00:24:20.499 "uuid": "772cd851-a436-5f30-9c0a-66bbc00dc4e8", 00:24:20.499 "is_configured": true, 00:24:20.499 "data_offset": 256, 00:24:20.499 "data_size": 7936 00:24:20.499 } 00:24:20.499 ] 00:24:20.499 }' 00:24:20.499 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:20.499 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:21.066 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:21.066 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:21.066 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:21.066 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:21.066 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:21.066 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:24:21.066 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:21.066 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.066 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:21.066 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.066 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:21.066 "name": "raid_bdev1", 00:24:21.066 "uuid": "5abe2332-cb89-45f2-b77f-8a2b7e142d7b", 00:24:21.066 "strip_size_kb": 0, 00:24:21.066 "state": "online", 00:24:21.066 "raid_level": "raid1", 00:24:21.066 "superblock": true, 00:24:21.066 "num_base_bdevs": 2, 00:24:21.066 "num_base_bdevs_discovered": 1, 00:24:21.066 "num_base_bdevs_operational": 1, 00:24:21.066 "base_bdevs_list": [ 00:24:21.066 { 00:24:21.066 "name": null, 00:24:21.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:21.066 "is_configured": false, 00:24:21.066 "data_offset": 0, 00:24:21.066 "data_size": 7936 00:24:21.066 }, 00:24:21.066 { 00:24:21.066 "name": "BaseBdev2", 00:24:21.066 "uuid": "772cd851-a436-5f30-9c0a-66bbc00dc4e8", 00:24:21.066 "is_configured": true, 00:24:21.066 "data_offset": 256, 00:24:21.066 "data_size": 7936 00:24:21.066 } 00:24:21.066 ] 00:24:21.066 }' 00:24:21.066 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:21.066 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:21.066 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:21.066 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:21.066 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd 
bdev_passthru_delete BaseBdev1 00:24:21.066 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.066 13:19:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:21.066 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.066 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:21.066 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.066 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:21.066 [2024-12-06 13:19:08.009173] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:21.066 [2024-12-06 13:19:08.009264] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:21.066 [2024-12-06 13:19:08.009310] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:24:21.066 [2024-12-06 13:19:08.009338] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:21.066 [2024-12-06 13:19:08.010037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:21.066 [2024-12-06 13:19:08.010065] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:21.066 [2024-12-06 13:19:08.010182] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:21.066 [2024-12-06 13:19:08.010210] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:21.066 [2024-12-06 13:19:08.010238] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:21.066 [2024-12-06 13:19:08.010254] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed 
to examine bdev BaseBdev1: Invalid argument 00:24:21.066 BaseBdev1 00:24:21.066 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.066 13:19:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:24:22.441 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:22.441 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:22.441 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:22.441 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:22.441 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:22.441 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:22.441 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:22.441 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:22.441 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:22.441 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:22.441 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:22.441 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:22.441 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.441 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:22.441 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.441 13:19:09 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:22.441 "name": "raid_bdev1", 00:24:22.441 "uuid": "5abe2332-cb89-45f2-b77f-8a2b7e142d7b", 00:24:22.441 "strip_size_kb": 0, 00:24:22.441 "state": "online", 00:24:22.441 "raid_level": "raid1", 00:24:22.441 "superblock": true, 00:24:22.441 "num_base_bdevs": 2, 00:24:22.441 "num_base_bdevs_discovered": 1, 00:24:22.441 "num_base_bdevs_operational": 1, 00:24:22.441 "base_bdevs_list": [ 00:24:22.441 { 00:24:22.441 "name": null, 00:24:22.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:22.441 "is_configured": false, 00:24:22.441 "data_offset": 0, 00:24:22.441 "data_size": 7936 00:24:22.441 }, 00:24:22.441 { 00:24:22.441 "name": "BaseBdev2", 00:24:22.441 "uuid": "772cd851-a436-5f30-9c0a-66bbc00dc4e8", 00:24:22.441 "is_configured": true, 00:24:22.441 "data_offset": 256, 00:24:22.441 "data_size": 7936 00:24:22.441 } 00:24:22.441 ] 00:24:22.441 }' 00:24:22.441 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:22.441 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:22.700 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:22.700 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:22.700 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:22.700 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:22.700 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:22.700 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:22.700 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:22.700 13:19:09 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.700 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:22.700 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.700 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:22.700 "name": "raid_bdev1", 00:24:22.700 "uuid": "5abe2332-cb89-45f2-b77f-8a2b7e142d7b", 00:24:22.700 "strip_size_kb": 0, 00:24:22.700 "state": "online", 00:24:22.700 "raid_level": "raid1", 00:24:22.700 "superblock": true, 00:24:22.700 "num_base_bdevs": 2, 00:24:22.700 "num_base_bdevs_discovered": 1, 00:24:22.700 "num_base_bdevs_operational": 1, 00:24:22.700 "base_bdevs_list": [ 00:24:22.700 { 00:24:22.700 "name": null, 00:24:22.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:22.700 "is_configured": false, 00:24:22.700 "data_offset": 0, 00:24:22.700 "data_size": 7936 00:24:22.700 }, 00:24:22.700 { 00:24:22.700 "name": "BaseBdev2", 00:24:22.700 "uuid": "772cd851-a436-5f30-9c0a-66bbc00dc4e8", 00:24:22.700 "is_configured": true, 00:24:22.700 "data_offset": 256, 00:24:22.700 "data_size": 7936 00:24:22.700 } 00:24:22.700 ] 00:24:22.700 }' 00:24:22.700 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:22.700 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:22.700 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:22.700 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:22.700 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:22.700 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:24:22.700 13:19:09 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:22.700 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:22.700 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:22.700 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:22.700 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:22.700 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:22.700 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.700 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:22.700 [2024-12-06 13:19:09.693701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:22.700 [2024-12-06 13:19:09.694044] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:22.700 [2024-12-06 13:19:09.694080] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:22.700 request: 00:24:22.700 { 00:24:22.700 "base_bdev": "BaseBdev1", 00:24:22.700 "raid_bdev": "raid_bdev1", 00:24:22.700 "method": "bdev_raid_add_base_bdev", 00:24:22.700 "req_id": 1 00:24:22.700 } 00:24:22.700 Got JSON-RPC error response 00:24:22.700 response: 00:24:22.700 { 00:24:22.700 "code": -22, 00:24:22.700 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:24:22.700 } 00:24:22.700 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:22.700 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@655 -- # es=1 00:24:22.700 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:22.700 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:22.700 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:22.700 13:19:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:24:24.074 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:24.074 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:24.074 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:24.074 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:24.074 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:24.074 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:24.074 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:24.074 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:24.074 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:24.074 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:24.074 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:24.074 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.074 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:24.074 13:19:10 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:24.074 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.074 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:24.074 "name": "raid_bdev1", 00:24:24.074 "uuid": "5abe2332-cb89-45f2-b77f-8a2b7e142d7b", 00:24:24.074 "strip_size_kb": 0, 00:24:24.074 "state": "online", 00:24:24.074 "raid_level": "raid1", 00:24:24.074 "superblock": true, 00:24:24.074 "num_base_bdevs": 2, 00:24:24.074 "num_base_bdevs_discovered": 1, 00:24:24.074 "num_base_bdevs_operational": 1, 00:24:24.074 "base_bdevs_list": [ 00:24:24.074 { 00:24:24.074 "name": null, 00:24:24.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:24.074 "is_configured": false, 00:24:24.074 "data_offset": 0, 00:24:24.074 "data_size": 7936 00:24:24.074 }, 00:24:24.074 { 00:24:24.074 "name": "BaseBdev2", 00:24:24.074 "uuid": "772cd851-a436-5f30-9c0a-66bbc00dc4e8", 00:24:24.074 "is_configured": true, 00:24:24.074 "data_offset": 256, 00:24:24.074 "data_size": 7936 00:24:24.074 } 00:24:24.074 ] 00:24:24.074 }' 00:24:24.074 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:24.074 13:19:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:24.332 13:19:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:24.332 13:19:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:24.332 13:19:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:24.332 13:19:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:24.332 13:19:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:24.332 13:19:11 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:24.332 13:19:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.332 13:19:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:24.332 13:19:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:24.332 13:19:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.332 13:19:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:24.332 "name": "raid_bdev1", 00:24:24.332 "uuid": "5abe2332-cb89-45f2-b77f-8a2b7e142d7b", 00:24:24.332 "strip_size_kb": 0, 00:24:24.332 "state": "online", 00:24:24.332 "raid_level": "raid1", 00:24:24.332 "superblock": true, 00:24:24.332 "num_base_bdevs": 2, 00:24:24.332 "num_base_bdevs_discovered": 1, 00:24:24.332 "num_base_bdevs_operational": 1, 00:24:24.332 "base_bdevs_list": [ 00:24:24.332 { 00:24:24.332 "name": null, 00:24:24.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:24.332 "is_configured": false, 00:24:24.332 "data_offset": 0, 00:24:24.332 "data_size": 7936 00:24:24.332 }, 00:24:24.332 { 00:24:24.333 "name": "BaseBdev2", 00:24:24.333 "uuid": "772cd851-a436-5f30-9c0a-66bbc00dc4e8", 00:24:24.333 "is_configured": true, 00:24:24.333 "data_offset": 256, 00:24:24.333 "data_size": 7936 00:24:24.333 } 00:24:24.333 ] 00:24:24.333 }' 00:24:24.333 13:19:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:24.333 13:19:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:24.590 13:19:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:24.590 13:19:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:24.590 13:19:11 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@784 -- # killprocess 87335 00:24:24.590 13:19:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 87335 ']' 00:24:24.590 13:19:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 87335 00:24:24.590 13:19:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:24:24.591 13:19:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:24.591 13:19:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87335 00:24:24.591 13:19:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:24.591 13:19:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:24.591 killing process with pid 87335 00:24:24.591 13:19:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87335' 00:24:24.591 Received shutdown signal, test time was about 60.000000 seconds 00:24:24.591 00:24:24.591 Latency(us) 00:24:24.591 [2024-12-06T13:19:11.607Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:24.591 [2024-12-06T13:19:11.607Z] =================================================================================================================== 00:24:24.591 [2024-12-06T13:19:11.607Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:24.591 13:19:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 87335 00:24:24.591 [2024-12-06 13:19:11.424891] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:24.591 13:19:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 87335 00:24:24.591 [2024-12-06 13:19:11.425096] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:24.591 [2024-12-06 13:19:11.425188] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:24.591 [2024-12-06 13:19:11.425217] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:24:24.849 [2024-12-06 13:19:11.694562] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:25.784 13:19:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:24:25.784 00:24:25.784 real 0m21.705s 00:24:25.784 user 0m29.220s 00:24:25.784 sys 0m2.691s 00:24:25.784 13:19:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:25.784 13:19:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:24:25.784 ************************************ 00:24:25.784 END TEST raid_rebuild_test_sb_4k 00:24:25.784 ************************************ 00:24:26.042 13:19:12 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:24:26.042 13:19:12 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:24:26.042 13:19:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:26.042 13:19:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:26.042 13:19:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:26.042 ************************************ 00:24:26.042 START TEST raid_state_function_test_sb_md_separate 00:24:26.042 ************************************ 00:24:26.042 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:24:26.042 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:24:26.042 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:24:26.042 13:19:12 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:24:26.042 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:24:26.042 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:24:26.042 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:26.042 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:24:26.042 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:26.042 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:26.042 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:24:26.042 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:26.042 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:26.042 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:26.042 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:24:26.042 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:24:26.042 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:24:26.042 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:24:26.042 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:24:26.042 13:19:12 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:24:26.042 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:24:26.042 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:24:26.042 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:24:26.042 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=88039 00:24:26.042 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:24:26.042 Process raid pid: 88039 00:24:26.042 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88039' 00:24:26.042 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 88039 00:24:26.042 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88039 ']' 00:24:26.042 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.042 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:26.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:26.043 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:26.043 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:26.043 13:19:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:26.043 [2024-12-06 13:19:12.973632] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:24:26.043 [2024-12-06 13:19:12.973808] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:26.300 [2024-12-06 13:19:13.150915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.300 [2024-12-06 13:19:13.292757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.558 [2024-12-06 13:19:13.518407] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:26.558 [2024-12-06 13:19:13.518496] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:27.124 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:27.124 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:24:27.124 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:27.124 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.124 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:27.124 [2024-12-06 13:19:14.009417] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:27.124 [2024-12-06 13:19:14.009541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:24:27.124 [2024-12-06 13:19:14.009559] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:27.124 [2024-12-06 13:19:14.009577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:27.124 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.124 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:27.124 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:27.124 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:27.124 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:27.124 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:27.124 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:27.124 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:27.124 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:27.124 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:27.124 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:27.124 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:27.124 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:24:27.124 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.124 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:27.124 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.124 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:27.124 "name": "Existed_Raid", 00:24:27.124 "uuid": "1237ea23-d9ad-4750-8e40-4fedbf3345f0", 00:24:27.124 "strip_size_kb": 0, 00:24:27.124 "state": "configuring", 00:24:27.124 "raid_level": "raid1", 00:24:27.124 "superblock": true, 00:24:27.124 "num_base_bdevs": 2, 00:24:27.124 "num_base_bdevs_discovered": 0, 00:24:27.124 "num_base_bdevs_operational": 2, 00:24:27.124 "base_bdevs_list": [ 00:24:27.124 { 00:24:27.124 "name": "BaseBdev1", 00:24:27.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:27.124 "is_configured": false, 00:24:27.124 "data_offset": 0, 00:24:27.124 "data_size": 0 00:24:27.124 }, 00:24:27.124 { 00:24:27.124 "name": "BaseBdev2", 00:24:27.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:27.124 "is_configured": false, 00:24:27.124 "data_offset": 0, 00:24:27.124 "data_size": 0 00:24:27.124 } 00:24:27.124 ] 00:24:27.124 }' 00:24:27.124 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:27.124 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:27.690 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:27.690 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.690 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:27.690 [2024-12-06 
13:19:14.533581] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:27.690 [2024-12-06 13:19:14.533690] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:24:27.690 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.690 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:27.690 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.690 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:27.690 [2024-12-06 13:19:14.541564] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:27.690 [2024-12-06 13:19:14.541629] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:27.690 [2024-12-06 13:19:14.541644] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:27.690 [2024-12-06 13:19:14.541663] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:27.690 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.690 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:24:27.690 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.690 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:27.690 [2024-12-06 13:19:14.590749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:27.690 BaseBdev1 
00:24:27.690 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.690 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:24:27.690 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:24:27.690 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:27.690 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:24:27.690 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:27.690 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:27.690 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:27.690 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.690 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:27.690 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.691 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:27.691 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.691 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:27.691 [ 00:24:27.691 { 00:24:27.691 "name": "BaseBdev1", 00:24:27.691 "aliases": [ 00:24:27.691 "119c83ad-12c4-40d6-9f3b-de2b9d1b7618" 00:24:27.691 ], 00:24:27.691 "product_name": "Malloc disk", 00:24:27.691 
"block_size": 4096, 00:24:27.691 "num_blocks": 8192, 00:24:27.691 "uuid": "119c83ad-12c4-40d6-9f3b-de2b9d1b7618", 00:24:27.691 "md_size": 32, 00:24:27.691 "md_interleave": false, 00:24:27.691 "dif_type": 0, 00:24:27.691 "assigned_rate_limits": { 00:24:27.691 "rw_ios_per_sec": 0, 00:24:27.691 "rw_mbytes_per_sec": 0, 00:24:27.691 "r_mbytes_per_sec": 0, 00:24:27.691 "w_mbytes_per_sec": 0 00:24:27.691 }, 00:24:27.691 "claimed": true, 00:24:27.691 "claim_type": "exclusive_write", 00:24:27.691 "zoned": false, 00:24:27.691 "supported_io_types": { 00:24:27.691 "read": true, 00:24:27.691 "write": true, 00:24:27.691 "unmap": true, 00:24:27.691 "flush": true, 00:24:27.691 "reset": true, 00:24:27.691 "nvme_admin": false, 00:24:27.691 "nvme_io": false, 00:24:27.691 "nvme_io_md": false, 00:24:27.691 "write_zeroes": true, 00:24:27.691 "zcopy": true, 00:24:27.691 "get_zone_info": false, 00:24:27.691 "zone_management": false, 00:24:27.691 "zone_append": false, 00:24:27.691 "compare": false, 00:24:27.691 "compare_and_write": false, 00:24:27.691 "abort": true, 00:24:27.691 "seek_hole": false, 00:24:27.691 "seek_data": false, 00:24:27.691 "copy": true, 00:24:27.691 "nvme_iov_md": false 00:24:27.691 }, 00:24:27.691 "memory_domains": [ 00:24:27.691 { 00:24:27.691 "dma_device_id": "system", 00:24:27.691 "dma_device_type": 1 00:24:27.691 }, 00:24:27.691 { 00:24:27.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:27.691 "dma_device_type": 2 00:24:27.691 } 00:24:27.691 ], 00:24:27.691 "driver_specific": {} 00:24:27.691 } 00:24:27.691 ] 00:24:27.691 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.691 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:24:27.691 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:27.691 13:19:14 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:27.691 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:27.691 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:27.691 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:27.691 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:27.691 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:27.691 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:27.691 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:27.691 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:27.691 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:27.691 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:27.691 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.691 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:27.691 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.691 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:27.691 "name": "Existed_Raid", 00:24:27.691 "uuid": "0cc99590-e40d-4228-afbf-5e3b274b860a", 
00:24:27.691 "strip_size_kb": 0, 00:24:27.691 "state": "configuring", 00:24:27.691 "raid_level": "raid1", 00:24:27.691 "superblock": true, 00:24:27.691 "num_base_bdevs": 2, 00:24:27.691 "num_base_bdevs_discovered": 1, 00:24:27.691 "num_base_bdevs_operational": 2, 00:24:27.691 "base_bdevs_list": [ 00:24:27.691 { 00:24:27.691 "name": "BaseBdev1", 00:24:27.691 "uuid": "119c83ad-12c4-40d6-9f3b-de2b9d1b7618", 00:24:27.691 "is_configured": true, 00:24:27.691 "data_offset": 256, 00:24:27.691 "data_size": 7936 00:24:27.691 }, 00:24:27.691 { 00:24:27.691 "name": "BaseBdev2", 00:24:27.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:27.691 "is_configured": false, 00:24:27.691 "data_offset": 0, 00:24:27.691 "data_size": 0 00:24:27.691 } 00:24:27.691 ] 00:24:27.691 }' 00:24:27.691 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:27.691 13:19:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:28.259 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:28.259 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.259 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:28.259 [2024-12-06 13:19:15.151062] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:28.259 [2024-12-06 13:19:15.151146] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:24:28.259 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.259 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:28.259 13:19:15 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.259 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:28.259 [2024-12-06 13:19:15.159045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:28.259 [2024-12-06 13:19:15.161739] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:28.259 [2024-12-06 13:19:15.161796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:28.259 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.259 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:24:28.259 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:28.259 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:28.259 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:28.259 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:28.259 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:28.259 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:28.259 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:28.259 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:28.259 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:28.259 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:28.259 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:28.259 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:28.259 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:28.259 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.259 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:28.259 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.259 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:28.259 "name": "Existed_Raid", 00:24:28.259 "uuid": "22bc293e-9526-460c-9983-96ce21b72c14", 00:24:28.259 "strip_size_kb": 0, 00:24:28.259 "state": "configuring", 00:24:28.259 "raid_level": "raid1", 00:24:28.259 "superblock": true, 00:24:28.259 "num_base_bdevs": 2, 00:24:28.259 "num_base_bdevs_discovered": 1, 00:24:28.259 "num_base_bdevs_operational": 2, 00:24:28.259 "base_bdevs_list": [ 00:24:28.259 { 00:24:28.259 "name": "BaseBdev1", 00:24:28.259 "uuid": "119c83ad-12c4-40d6-9f3b-de2b9d1b7618", 00:24:28.259 "is_configured": true, 00:24:28.259 "data_offset": 256, 00:24:28.259 "data_size": 7936 00:24:28.259 }, 00:24:28.259 { 00:24:28.259 "name": "BaseBdev2", 00:24:28.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:28.259 "is_configured": false, 00:24:28.259 "data_offset": 0, 00:24:28.259 "data_size": 0 00:24:28.259 } 00:24:28.259 ] 00:24:28.259 }' 00:24:28.259 13:19:15 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:28.259 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:28.827 [2024-12-06 13:19:15.740285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:28.827 [2024-12-06 13:19:15.740656] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:24:28.827 [2024-12-06 13:19:15.740680] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:28.827 [2024-12-06 13:19:15.740807] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:28.827 [2024-12-06 13:19:15.741009] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:28.827 [2024-12-06 13:19:15.741030] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:24:28.827 BaseBdev2 00:24:28.827 [2024-12-06 13:19:15.741152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:28.827 [ 00:24:28.827 { 00:24:28.827 "name": "BaseBdev2", 00:24:28.827 "aliases": [ 00:24:28.827 "e8a8f276-5287-417b-9e1f-ea1a46d4aed0" 00:24:28.827 ], 00:24:28.827 "product_name": "Malloc disk", 00:24:28.827 "block_size": 4096, 00:24:28.827 "num_blocks": 8192, 00:24:28.827 "uuid": "e8a8f276-5287-417b-9e1f-ea1a46d4aed0", 00:24:28.827 "md_size": 32, 00:24:28.827 "md_interleave": false, 00:24:28.827 "dif_type": 0, 00:24:28.827 "assigned_rate_limits": { 00:24:28.827 "rw_ios_per_sec": 0, 00:24:28.827 "rw_mbytes_per_sec": 0, 00:24:28.827 "r_mbytes_per_sec": 0, 00:24:28.827 "w_mbytes_per_sec": 0 00:24:28.827 }, 00:24:28.827 "claimed": true, 00:24:28.827 "claim_type": 
"exclusive_write", 00:24:28.827 "zoned": false, 00:24:28.827 "supported_io_types": { 00:24:28.827 "read": true, 00:24:28.827 "write": true, 00:24:28.827 "unmap": true, 00:24:28.827 "flush": true, 00:24:28.827 "reset": true, 00:24:28.827 "nvme_admin": false, 00:24:28.827 "nvme_io": false, 00:24:28.827 "nvme_io_md": false, 00:24:28.827 "write_zeroes": true, 00:24:28.827 "zcopy": true, 00:24:28.827 "get_zone_info": false, 00:24:28.827 "zone_management": false, 00:24:28.827 "zone_append": false, 00:24:28.827 "compare": false, 00:24:28.827 "compare_and_write": false, 00:24:28.827 "abort": true, 00:24:28.827 "seek_hole": false, 00:24:28.827 "seek_data": false, 00:24:28.827 "copy": true, 00:24:28.827 "nvme_iov_md": false 00:24:28.827 }, 00:24:28.827 "memory_domains": [ 00:24:28.827 { 00:24:28.827 "dma_device_id": "system", 00:24:28.827 "dma_device_type": 1 00:24:28.827 }, 00:24:28.827 { 00:24:28.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:28.827 "dma_device_type": 2 00:24:28.827 } 00:24:28.827 ], 00:24:28.827 "driver_specific": {} 00:24:28.827 } 00:24:28.827 ] 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:28.827 
13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:28.827 "name": "Existed_Raid", 00:24:28.827 "uuid": "22bc293e-9526-460c-9983-96ce21b72c14", 00:24:28.827 "strip_size_kb": 0, 00:24:28.827 "state": "online", 00:24:28.827 "raid_level": "raid1", 00:24:28.827 "superblock": true, 00:24:28.827 "num_base_bdevs": 2, 00:24:28.827 "num_base_bdevs_discovered": 2, 00:24:28.827 "num_base_bdevs_operational": 2, 00:24:28.827 
"base_bdevs_list": [ 00:24:28.827 { 00:24:28.827 "name": "BaseBdev1", 00:24:28.827 "uuid": "119c83ad-12c4-40d6-9f3b-de2b9d1b7618", 00:24:28.827 "is_configured": true, 00:24:28.827 "data_offset": 256, 00:24:28.827 "data_size": 7936 00:24:28.827 }, 00:24:28.827 { 00:24:28.827 "name": "BaseBdev2", 00:24:28.827 "uuid": "e8a8f276-5287-417b-9e1f-ea1a46d4aed0", 00:24:28.827 "is_configured": true, 00:24:28.827 "data_offset": 256, 00:24:28.827 "data_size": 7936 00:24:28.827 } 00:24:28.827 ] 00:24:28.827 }' 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:28.827 13:19:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:29.393 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:24:29.393 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:24:29.393 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:29.393 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:29.393 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:24:29.393 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:29.393 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:24:29.393 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.393 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:29.393 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:24:29.393 [2024-12-06 13:19:16.321119] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:29.393 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.393 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:29.393 "name": "Existed_Raid", 00:24:29.393 "aliases": [ 00:24:29.393 "22bc293e-9526-460c-9983-96ce21b72c14" 00:24:29.393 ], 00:24:29.393 "product_name": "Raid Volume", 00:24:29.393 "block_size": 4096, 00:24:29.393 "num_blocks": 7936, 00:24:29.393 "uuid": "22bc293e-9526-460c-9983-96ce21b72c14", 00:24:29.393 "md_size": 32, 00:24:29.393 "md_interleave": false, 00:24:29.393 "dif_type": 0, 00:24:29.393 "assigned_rate_limits": { 00:24:29.393 "rw_ios_per_sec": 0, 00:24:29.393 "rw_mbytes_per_sec": 0, 00:24:29.393 "r_mbytes_per_sec": 0, 00:24:29.393 "w_mbytes_per_sec": 0 00:24:29.393 }, 00:24:29.393 "claimed": false, 00:24:29.393 "zoned": false, 00:24:29.393 "supported_io_types": { 00:24:29.393 "read": true, 00:24:29.393 "write": true, 00:24:29.393 "unmap": false, 00:24:29.393 "flush": false, 00:24:29.393 "reset": true, 00:24:29.393 "nvme_admin": false, 00:24:29.393 "nvme_io": false, 00:24:29.393 "nvme_io_md": false, 00:24:29.393 "write_zeroes": true, 00:24:29.393 "zcopy": false, 00:24:29.393 "get_zone_info": false, 00:24:29.393 "zone_management": false, 00:24:29.393 "zone_append": false, 00:24:29.393 "compare": false, 00:24:29.393 "compare_and_write": false, 00:24:29.393 "abort": false, 00:24:29.393 "seek_hole": false, 00:24:29.393 "seek_data": false, 00:24:29.393 "copy": false, 00:24:29.393 "nvme_iov_md": false 00:24:29.393 }, 00:24:29.393 "memory_domains": [ 00:24:29.393 { 00:24:29.393 "dma_device_id": "system", 00:24:29.393 "dma_device_type": 1 00:24:29.393 }, 00:24:29.393 { 00:24:29.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:29.393 "dma_device_type": 2 00:24:29.393 }, 00:24:29.393 { 
00:24:29.393 "dma_device_id": "system", 00:24:29.393 "dma_device_type": 1 00:24:29.393 }, 00:24:29.393 { 00:24:29.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:29.393 "dma_device_type": 2 00:24:29.393 } 00:24:29.393 ], 00:24:29.393 "driver_specific": { 00:24:29.393 "raid": { 00:24:29.393 "uuid": "22bc293e-9526-460c-9983-96ce21b72c14", 00:24:29.393 "strip_size_kb": 0, 00:24:29.393 "state": "online", 00:24:29.393 "raid_level": "raid1", 00:24:29.393 "superblock": true, 00:24:29.393 "num_base_bdevs": 2, 00:24:29.393 "num_base_bdevs_discovered": 2, 00:24:29.394 "num_base_bdevs_operational": 2, 00:24:29.394 "base_bdevs_list": [ 00:24:29.394 { 00:24:29.394 "name": "BaseBdev1", 00:24:29.394 "uuid": "119c83ad-12c4-40d6-9f3b-de2b9d1b7618", 00:24:29.394 "is_configured": true, 00:24:29.394 "data_offset": 256, 00:24:29.394 "data_size": 7936 00:24:29.394 }, 00:24:29.394 { 00:24:29.394 "name": "BaseBdev2", 00:24:29.394 "uuid": "e8a8f276-5287-417b-9e1f-ea1a46d4aed0", 00:24:29.394 "is_configured": true, 00:24:29.394 "data_offset": 256, 00:24:29.394 "data_size": 7936 00:24:29.394 } 00:24:29.394 ] 00:24:29.394 } 00:24:29.394 } 00:24:29.394 }' 00:24:29.394 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:29.652 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:24:29.652 BaseBdev2' 00:24:29.652 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:29.652 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:24:29.652 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:29.652 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:29.652 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:24:29.652 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.652 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:29.652 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.652 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:24:29.652 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:24:29.652 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:29.652 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:24:29.652 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.652 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:29.652 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:29.652 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.652 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:24:29.652 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:24:29.652 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:24:29.652 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.652 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:29.652 [2024-12-06 13:19:16.588785] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:29.911 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.911 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:24:29.911 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:24:29.911 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:29.911 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:24:29.911 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:24:29.911 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:24:29.911 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:29.911 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:29.911 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:29.911 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:29.911 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:24:29.911 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:29.911 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:29.911 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:29.911 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:29.911 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:29.911 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:29.911 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.911 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:29.911 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.911 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:29.911 "name": "Existed_Raid", 00:24:29.911 "uuid": "22bc293e-9526-460c-9983-96ce21b72c14", 00:24:29.911 "strip_size_kb": 0, 00:24:29.911 "state": "online", 00:24:29.911 "raid_level": "raid1", 00:24:29.911 "superblock": true, 00:24:29.911 "num_base_bdevs": 2, 00:24:29.911 "num_base_bdevs_discovered": 1, 00:24:29.911 "num_base_bdevs_operational": 1, 00:24:29.911 "base_bdevs_list": [ 00:24:29.911 { 00:24:29.911 "name": null, 00:24:29.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:29.911 "is_configured": false, 00:24:29.911 "data_offset": 0, 00:24:29.911 "data_size": 7936 00:24:29.911 }, 00:24:29.911 { 00:24:29.911 "name": "BaseBdev2", 00:24:29.911 "uuid": 
"e8a8f276-5287-417b-9e1f-ea1a46d4aed0", 00:24:29.911 "is_configured": true, 00:24:29.911 "data_offset": 256, 00:24:29.911 "data_size": 7936 00:24:29.911 } 00:24:29.911 ] 00:24:29.911 }' 00:24:29.911 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:29.911 13:19:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:30.479 13:19:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:24:30.479 13:19:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:30.479 13:19:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:30.479 13:19:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.479 13:19:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:30.479 13:19:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:30.479 13:19:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.479 13:19:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:24:30.479 13:19:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:30.479 13:19:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:24:30.479 13:19:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.479 13:19:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:30.479 [2024-12-06 13:19:17.281854] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:30.479 [2024-12-06 13:19:17.282058] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:30.479 [2024-12-06 13:19:17.371631] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:30.479 [2024-12-06 13:19:17.371740] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:30.479 [2024-12-06 13:19:17.371762] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:24:30.480 13:19:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.480 13:19:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:30.480 13:19:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:30.480 13:19:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:30.480 13:19:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:24:30.480 13:19:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.480 13:19:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:30.480 13:19:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.480 13:19:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:24:30.480 13:19:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:24:30.480 13:19:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:24:30.480 13:19:17 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 88039 00:24:30.480 13:19:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88039 ']' 00:24:30.480 13:19:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88039 00:24:30.480 13:19:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:24:30.480 13:19:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:30.480 13:19:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88039 00:24:30.480 13:19:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:30.480 13:19:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:30.480 killing process with pid 88039 00:24:30.480 13:19:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88039' 00:24:30.480 13:19:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88039 00:24:30.480 [2024-12-06 13:19:17.468976] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:30.480 13:19:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88039 00:24:30.480 [2024-12-06 13:19:17.484656] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:31.857 13:19:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:24:31.857 00:24:31.857 real 0m5.684s 00:24:31.857 user 0m8.504s 00:24:31.857 sys 0m0.927s 00:24:31.857 13:19:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:31.857 
13:19:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:31.857 ************************************ 00:24:31.857 END TEST raid_state_function_test_sb_md_separate 00:24:31.857 ************************************ 00:24:31.857 13:19:18 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:24:31.857 13:19:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:31.857 13:19:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:31.857 13:19:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:31.857 ************************************ 00:24:31.857 START TEST raid_superblock_test_md_separate 00:24:31.857 ************************************ 00:24:31.857 13:19:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:24:31.857 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:24:31.857 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:24:31.857 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:24:31.857 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:24:31.857 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:24:31.857 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:24:31.857 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:24:31.857 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:24:31.857 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:24:31.857 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:24:31.857 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:24:31.857 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:24:31.857 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:24:31.857 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:24:31.857 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:24:31.857 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=88292 00:24:31.857 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 88292 00:24:31.857 13:19:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88292 ']' 00:24:31.857 13:19:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.857 13:19:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:31.857 13:19:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:24:31.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.857 13:19:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:31.857 13:19:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:31.857 13:19:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:31.857 [2024-12-06 13:19:18.724390] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:24:31.857 [2024-12-06 13:19:18.724593] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88292 ] 00:24:32.116 [2024-12-06 13:19:18.913414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.116 [2024-12-06 13:19:19.053211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.374 [2024-12-06 13:19:19.273168] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:32.374 [2024-12-06 13:19:19.273268] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:32.940 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:32.940 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:24:32.940 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:24:32.940 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:32.940 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:24:32.940 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:24:32.940 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:32.940 13:19:19 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:32.940 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:32.940 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:32.940 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:24:32.940 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.940 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:32.940 malloc1 00:24:32.940 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.940 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:32.940 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.940 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:32.940 [2024-12-06 13:19:19.737714] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:32.940 [2024-12-06 13:19:19.737808] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:32.940 [2024-12-06 13:19:19.737843] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:32.940 [2024-12-06 13:19:19.737859] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:32.940 [2024-12-06 13:19:19.740620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:32.940 [2024-12-06 13:19:19.740676] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:24:32.940 pt1 00:24:32.940 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.940 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:32.940 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:32.940 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:24:32.940 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:24:32.940 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:32.940 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:32.940 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:32.940 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:32.940 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:24:32.940 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.940 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:32.940 malloc2 00:24:32.940 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.940 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:32.940 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.940 13:19:19 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:32.940 [2024-12-06 13:19:19.787893] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:32.940 [2024-12-06 13:19:19.787979] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:32.940 [2024-12-06 13:19:19.788013] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:32.940 [2024-12-06 13:19:19.788029] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:32.940 [2024-12-06 13:19:19.790739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:32.940 [2024-12-06 13:19:19.790795] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:32.940 pt2 00:24:32.940 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.940 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:32.940 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:32.940 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:24:32.940 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.940 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:32.940 [2024-12-06 13:19:19.795912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:32.940 [2024-12-06 13:19:19.798418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:32.940 [2024-12-06 13:19:19.798741] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:32.940 [2024-12-06 13:19:19.798774] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:32.940 [2024-12-06 13:19:19.798874] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:32.940 [2024-12-06 13:19:19.799066] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:32.941 [2024-12-06 13:19:19.799098] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:32.941 [2024-12-06 13:19:19.799234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:32.941 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.941 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:32.941 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:32.941 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:32.941 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:32.941 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:32.941 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:32.941 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:32.941 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:32.941 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:32.941 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:32.941 13:19:19 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:32.941 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.941 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:32.941 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:32.941 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.941 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:32.941 "name": "raid_bdev1", 00:24:32.941 "uuid": "069525d3-b2b2-49b0-8df6-704262c4442d", 00:24:32.941 "strip_size_kb": 0, 00:24:32.941 "state": "online", 00:24:32.941 "raid_level": "raid1", 00:24:32.941 "superblock": true, 00:24:32.941 "num_base_bdevs": 2, 00:24:32.941 "num_base_bdevs_discovered": 2, 00:24:32.941 "num_base_bdevs_operational": 2, 00:24:32.941 "base_bdevs_list": [ 00:24:32.941 { 00:24:32.941 "name": "pt1", 00:24:32.941 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:32.941 "is_configured": true, 00:24:32.941 "data_offset": 256, 00:24:32.941 "data_size": 7936 00:24:32.941 }, 00:24:32.941 { 00:24:32.941 "name": "pt2", 00:24:32.941 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:32.941 "is_configured": true, 00:24:32.941 "data_offset": 256, 00:24:32.941 "data_size": 7936 00:24:32.941 } 00:24:32.941 ] 00:24:32.941 }' 00:24:32.941 13:19:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:32.941 13:19:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:33.507 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:24:33.507 13:19:20 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:24:33.507 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:33.507 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:33.507 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:24:33.507 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:33.507 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:33.507 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:33.507 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.507 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:33.507 [2024-12-06 13:19:20.308512] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:33.507 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.507 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:33.507 "name": "raid_bdev1", 00:24:33.507 "aliases": [ 00:24:33.507 "069525d3-b2b2-49b0-8df6-704262c4442d" 00:24:33.507 ], 00:24:33.507 "product_name": "Raid Volume", 00:24:33.507 "block_size": 4096, 00:24:33.507 "num_blocks": 7936, 00:24:33.507 "uuid": "069525d3-b2b2-49b0-8df6-704262c4442d", 00:24:33.507 "md_size": 32, 00:24:33.507 "md_interleave": false, 00:24:33.507 "dif_type": 0, 00:24:33.507 "assigned_rate_limits": { 00:24:33.507 "rw_ios_per_sec": 0, 00:24:33.507 "rw_mbytes_per_sec": 0, 00:24:33.507 "r_mbytes_per_sec": 0, 00:24:33.507 "w_mbytes_per_sec": 0 00:24:33.507 }, 00:24:33.507 "claimed": false, 00:24:33.507 "zoned": false, 
00:24:33.507 "supported_io_types": { 00:24:33.507 "read": true, 00:24:33.507 "write": true, 00:24:33.507 "unmap": false, 00:24:33.507 "flush": false, 00:24:33.507 "reset": true, 00:24:33.507 "nvme_admin": false, 00:24:33.507 "nvme_io": false, 00:24:33.507 "nvme_io_md": false, 00:24:33.507 "write_zeroes": true, 00:24:33.507 "zcopy": false, 00:24:33.507 "get_zone_info": false, 00:24:33.507 "zone_management": false, 00:24:33.507 "zone_append": false, 00:24:33.507 "compare": false, 00:24:33.507 "compare_and_write": false, 00:24:33.507 "abort": false, 00:24:33.507 "seek_hole": false, 00:24:33.507 "seek_data": false, 00:24:33.507 "copy": false, 00:24:33.507 "nvme_iov_md": false 00:24:33.507 }, 00:24:33.507 "memory_domains": [ 00:24:33.507 { 00:24:33.507 "dma_device_id": "system", 00:24:33.507 "dma_device_type": 1 00:24:33.507 }, 00:24:33.507 { 00:24:33.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:33.507 "dma_device_type": 2 00:24:33.507 }, 00:24:33.507 { 00:24:33.507 "dma_device_id": "system", 00:24:33.507 "dma_device_type": 1 00:24:33.507 }, 00:24:33.507 { 00:24:33.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:33.507 "dma_device_type": 2 00:24:33.507 } 00:24:33.507 ], 00:24:33.507 "driver_specific": { 00:24:33.507 "raid": { 00:24:33.507 "uuid": "069525d3-b2b2-49b0-8df6-704262c4442d", 00:24:33.507 "strip_size_kb": 0, 00:24:33.507 "state": "online", 00:24:33.507 "raid_level": "raid1", 00:24:33.507 "superblock": true, 00:24:33.507 "num_base_bdevs": 2, 00:24:33.507 "num_base_bdevs_discovered": 2, 00:24:33.507 "num_base_bdevs_operational": 2, 00:24:33.507 "base_bdevs_list": [ 00:24:33.507 { 00:24:33.507 "name": "pt1", 00:24:33.507 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:33.507 "is_configured": true, 00:24:33.507 "data_offset": 256, 00:24:33.507 "data_size": 7936 00:24:33.507 }, 00:24:33.507 { 00:24:33.507 "name": "pt2", 00:24:33.507 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:33.507 "is_configured": true, 00:24:33.507 "data_offset": 256, 
00:24:33.507 "data_size": 7936 00:24:33.507 } 00:24:33.507 ] 00:24:33.507 } 00:24:33.507 } 00:24:33.507 }' 00:24:33.507 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:33.507 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:24:33.507 pt2' 00:24:33.507 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:33.507 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:24:33.507 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:33.507 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:24:33.507 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.507 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:33.507 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:33.507 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.507 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:24:33.507 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:24:33.507 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:33.507 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:24:33.507 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:33.507 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.507 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:24:33.878 [2024-12-06 13:19:20.568391] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=069525d3-b2b2-49b0-8df6-704262c4442d 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 069525d3-b2b2-49b0-8df6-704262c4442d ']' 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:33.878 [2024-12-06 13:19:20.616106] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:33.878 [2024-12-06 13:19:20.616138] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:33.878 [2024-12-06 13:19:20.616256] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:33.878 [2024-12-06 13:19:20.616338] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:33.878 [2024-12-06 13:19:20.616373] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
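The teardown sequence above (delete the raid bdev, then remove each passthru base bdev in a loop) can be sketched as the following minimal bash fragment. This is an illustrative stand-in, not the SPDK harness itself: `rpc_cmd` here is a stub that echoes its arguments, whereas the real helper forwards to `scripts/rpc.py` over a UNIX socket.

```shell
#!/usr/bin/env bash
# Hedged sketch of the teardown pattern visible in the log: first the raid
# bdev is deleted, then each passthru base bdev (pt1, pt2) is removed.
set -euo pipefail

# Stub standing in for the real SPDK rpc.py wrapper (assumption: the real
# rpc_cmd talks JSON-RPC to the target; this one just logs the call).
rpc_cmd() {
  echo "rpc: $*"
}

base_bdevs_pt=(pt1 pt2)

rpc_cmd bdev_raid_delete raid_bdev1
for i in "${base_bdevs_pt[@]}"; do
  rpc_cmd bdev_passthru_delete "$i"
done
```

After the loop, the test re-lists bdevs and checks that no `passthru` products remain, which is the `select(.product_name == "passthru")` jq check seen later in the log.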
00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:24:33.878 13:19:20 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:33.878 [2024-12-06 13:19:20.748172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:33.878 [2024-12-06 13:19:20.751064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:33.878 [2024-12-06 13:19:20.751179] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:24:33.878 [2024-12-06 13:19:20.751257] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:24:33.878 [2024-12-06 13:19:20.751285] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:33.878 [2024-12-06 13:19:20.751302] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:24:33.878 request: 00:24:33.878 { 00:24:33.878 "name": 
"raid_bdev1", 00:24:33.878 "raid_level": "raid1", 00:24:33.878 "base_bdevs": [ 00:24:33.878 "malloc1", 00:24:33.878 "malloc2" 00:24:33.878 ], 00:24:33.878 "superblock": false, 00:24:33.878 "method": "bdev_raid_create", 00:24:33.878 "req_id": 1 00:24:33.878 } 00:24:33.878 Got JSON-RPC error response 00:24:33.878 response: 00:24:33.878 { 00:24:33.878 "code": -17, 00:24:33.878 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:33.878 } 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:33.878 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:33.879 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:24:33.879 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:33.879 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.879 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:33.879 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.879 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:24:33.879 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:24:33.879 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:24:33.879 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.879 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:33.879 [2024-12-06 13:19:20.800157] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:33.879 [2024-12-06 13:19:20.800232] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:33.879 [2024-12-06 13:19:20.800258] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:33.879 [2024-12-06 13:19:20.800276] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:33.879 [2024-12-06 13:19:20.803247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:33.879 [2024-12-06 13:19:20.803296] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:33.879 [2024-12-06 13:19:20.803373] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:33.879 [2024-12-06 13:19:20.803447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:33.879 pt1 00:24:33.879 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.879 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:24:33.879 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:33.879 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:33.879 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:33.879 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:24:33.879 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:33.879 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:33.879 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:33.879 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:33.879 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:33.879 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:33.879 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:33.879 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.879 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:33.879 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.879 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:33.879 "name": "raid_bdev1", 00:24:33.879 "uuid": "069525d3-b2b2-49b0-8df6-704262c4442d", 00:24:33.879 "strip_size_kb": 0, 00:24:33.879 "state": "configuring", 00:24:33.879 "raid_level": "raid1", 00:24:33.879 "superblock": true, 00:24:33.879 "num_base_bdevs": 2, 00:24:33.879 "num_base_bdevs_discovered": 1, 00:24:33.879 "num_base_bdevs_operational": 2, 00:24:33.879 "base_bdevs_list": [ 00:24:33.879 { 00:24:33.879 "name": "pt1", 00:24:33.879 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:33.879 "is_configured": true, 00:24:33.879 "data_offset": 256, 00:24:33.879 "data_size": 7936 00:24:33.879 }, 00:24:33.879 { 00:24:33.879 "name": null, 00:24:33.879 
"uuid": "00000000-0000-0000-0000-000000000002", 00:24:33.879 "is_configured": false, 00:24:33.879 "data_offset": 256, 00:24:33.879 "data_size": 7936 00:24:33.879 } 00:24:33.879 ] 00:24:33.879 }' 00:24:33.879 13:19:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:33.879 13:19:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:34.446 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:24:34.446 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:24:34.446 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:34.446 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:34.446 13:19:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.446 13:19:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:34.446 [2024-12-06 13:19:21.316396] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:34.446 [2024-12-06 13:19:21.316538] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:34.446 [2024-12-06 13:19:21.316579] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:34.446 [2024-12-06 13:19:21.316600] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:34.446 [2024-12-06 13:19:21.316935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:34.446 [2024-12-06 13:19:21.316970] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:34.446 [2024-12-06 13:19:21.317050] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:24:34.446 [2024-12-06 13:19:21.317091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:34.446 [2024-12-06 13:19:21.317248] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:24:34.446 [2024-12-06 13:19:21.317272] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:34.446 [2024-12-06 13:19:21.317374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:24:34.446 [2024-12-06 13:19:21.317557] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:34.446 [2024-12-06 13:19:21.317573] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:24:34.446 [2024-12-06 13:19:21.317708] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:34.446 pt2 00:24:34.446 13:19:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.446 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:24:34.446 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:34.446 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:34.446 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:34.446 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:34.446 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:34.446 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:34.446 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:24:34.446 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:34.446 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:34.446 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:34.446 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:34.446 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:34.446 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:34.446 13:19:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.446 13:19:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:34.446 13:19:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.446 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:34.446 "name": "raid_bdev1", 00:24:34.446 "uuid": "069525d3-b2b2-49b0-8df6-704262c4442d", 00:24:34.446 "strip_size_kb": 0, 00:24:34.446 "state": "online", 00:24:34.446 "raid_level": "raid1", 00:24:34.446 "superblock": true, 00:24:34.446 "num_base_bdevs": 2, 00:24:34.446 "num_base_bdevs_discovered": 2, 00:24:34.446 "num_base_bdevs_operational": 2, 00:24:34.446 "base_bdevs_list": [ 00:24:34.446 { 00:24:34.446 "name": "pt1", 00:24:34.446 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:34.447 "is_configured": true, 00:24:34.447 "data_offset": 256, 00:24:34.447 "data_size": 7936 00:24:34.447 }, 00:24:34.447 { 00:24:34.447 "name": "pt2", 00:24:34.447 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:34.447 "is_configured": true, 00:24:34.447 "data_offset": 256, 
00:24:34.447 "data_size": 7936 00:24:34.447 } 00:24:34.447 ] 00:24:34.447 }' 00:24:34.447 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:34.447 13:19:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:35.012 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:24:35.012 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:24:35.012 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:35.012 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:35.012 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:24:35.012 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:35.012 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:35.012 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:35.012 13:19:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.012 13:19:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:35.012 [2024-12-06 13:19:21.848919] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:35.012 13:19:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.012 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:35.012 "name": "raid_bdev1", 00:24:35.012 "aliases": [ 00:24:35.012 "069525d3-b2b2-49b0-8df6-704262c4442d" 00:24:35.012 ], 00:24:35.012 "product_name": 
"Raid Volume", 00:24:35.012 "block_size": 4096, 00:24:35.012 "num_blocks": 7936, 00:24:35.012 "uuid": "069525d3-b2b2-49b0-8df6-704262c4442d", 00:24:35.012 "md_size": 32, 00:24:35.012 "md_interleave": false, 00:24:35.012 "dif_type": 0, 00:24:35.012 "assigned_rate_limits": { 00:24:35.012 "rw_ios_per_sec": 0, 00:24:35.012 "rw_mbytes_per_sec": 0, 00:24:35.012 "r_mbytes_per_sec": 0, 00:24:35.012 "w_mbytes_per_sec": 0 00:24:35.012 }, 00:24:35.012 "claimed": false, 00:24:35.012 "zoned": false, 00:24:35.012 "supported_io_types": { 00:24:35.012 "read": true, 00:24:35.012 "write": true, 00:24:35.012 "unmap": false, 00:24:35.012 "flush": false, 00:24:35.012 "reset": true, 00:24:35.012 "nvme_admin": false, 00:24:35.012 "nvme_io": false, 00:24:35.012 "nvme_io_md": false, 00:24:35.012 "write_zeroes": true, 00:24:35.012 "zcopy": false, 00:24:35.012 "get_zone_info": false, 00:24:35.012 "zone_management": false, 00:24:35.012 "zone_append": false, 00:24:35.012 "compare": false, 00:24:35.012 "compare_and_write": false, 00:24:35.012 "abort": false, 00:24:35.012 "seek_hole": false, 00:24:35.012 "seek_data": false, 00:24:35.012 "copy": false, 00:24:35.012 "nvme_iov_md": false 00:24:35.012 }, 00:24:35.012 "memory_domains": [ 00:24:35.012 { 00:24:35.012 "dma_device_id": "system", 00:24:35.012 "dma_device_type": 1 00:24:35.012 }, 00:24:35.012 { 00:24:35.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:35.012 "dma_device_type": 2 00:24:35.012 }, 00:24:35.012 { 00:24:35.012 "dma_device_id": "system", 00:24:35.012 "dma_device_type": 1 00:24:35.012 }, 00:24:35.012 { 00:24:35.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:35.012 "dma_device_type": 2 00:24:35.012 } 00:24:35.012 ], 00:24:35.012 "driver_specific": { 00:24:35.012 "raid": { 00:24:35.012 "uuid": "069525d3-b2b2-49b0-8df6-704262c4442d", 00:24:35.012 "strip_size_kb": 0, 00:24:35.012 "state": "online", 00:24:35.012 "raid_level": "raid1", 00:24:35.012 "superblock": true, 00:24:35.012 "num_base_bdevs": 2, 00:24:35.012 
"num_base_bdevs_discovered": 2, 00:24:35.012 "num_base_bdevs_operational": 2, 00:24:35.012 "base_bdevs_list": [ 00:24:35.012 { 00:24:35.012 "name": "pt1", 00:24:35.012 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:35.012 "is_configured": true, 00:24:35.012 "data_offset": 256, 00:24:35.012 "data_size": 7936 00:24:35.012 }, 00:24:35.012 { 00:24:35.012 "name": "pt2", 00:24:35.012 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:35.012 "is_configured": true, 00:24:35.012 "data_offset": 256, 00:24:35.012 "data_size": 7936 00:24:35.012 } 00:24:35.012 ] 00:24:35.012 } 00:24:35.012 } 00:24:35.012 }' 00:24:35.012 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:35.012 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:24:35.012 pt2' 00:24:35.012 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:35.012 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:24:35.012 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:35.012 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:24:35.012 13:19:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.012 13:19:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:35.012 13:19:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:35.012 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.271 
13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:35.271 [2024-12-06 13:19:22.109001] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 069525d3-b2b2-49b0-8df6-704262c4442d '!=' 069525d3-b2b2-49b0-8df6-704262c4442d ']' 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:35.271 [2024-12-06 13:19:22.156672] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:35.271 13:19:22 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:35.271 "name": "raid_bdev1", 00:24:35.271 "uuid": "069525d3-b2b2-49b0-8df6-704262c4442d", 00:24:35.271 "strip_size_kb": 0, 00:24:35.271 "state": "online", 00:24:35.271 "raid_level": "raid1", 00:24:35.271 "superblock": true, 00:24:35.271 "num_base_bdevs": 2, 00:24:35.271 "num_base_bdevs_discovered": 1, 00:24:35.271 "num_base_bdevs_operational": 1, 00:24:35.271 "base_bdevs_list": [ 00:24:35.271 { 00:24:35.271 "name": null, 00:24:35.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:35.271 "is_configured": false, 00:24:35.271 "data_offset": 0, 00:24:35.271 "data_size": 7936 00:24:35.271 }, 00:24:35.271 { 00:24:35.271 "name": "pt2", 00:24:35.271 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:35.271 "is_configured": true, 00:24:35.271 "data_offset": 256, 00:24:35.271 "data_size": 7936 00:24:35.271 } 00:24:35.271 ] 00:24:35.271 }' 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:24:35.271 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:35.838 [2024-12-06 13:19:22.684826] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:35.838 [2024-12-06 13:19:22.684883] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:35.838 [2024-12-06 13:19:22.685022] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:35.838 [2024-12-06 13:19:22.685100] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:35.838 [2024-12-06 13:19:22.685121] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:24:35.838 13:19:22 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:35.838 [2024-12-06 13:19:22.748772] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:35.838 [2024-12-06 13:19:22.748843] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:35.838 
[2024-12-06 13:19:22.748870] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:24:35.838 [2024-12-06 13:19:22.748889] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:35.838 [2024-12-06 13:19:22.752118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:35.838 [2024-12-06 13:19:22.752180] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:35.838 [2024-12-06 13:19:22.752258] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:35.838 [2024-12-06 13:19:22.752328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:35.838 [2024-12-06 13:19:22.752456] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:24:35.838 [2024-12-06 13:19:22.752496] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:35.838 [2024-12-06 13:19:22.752612] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:24:35.838 [2024-12-06 13:19:22.752782] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:24:35.838 [2024-12-06 13:19:22.752797] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:24:35.838 [2024-12-06 13:19:22.752977] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:35.838 pt2 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:35.838 "name": "raid_bdev1", 00:24:35.838 "uuid": "069525d3-b2b2-49b0-8df6-704262c4442d", 00:24:35.838 "strip_size_kb": 0, 00:24:35.838 "state": "online", 00:24:35.838 "raid_level": "raid1", 00:24:35.838 "superblock": true, 00:24:35.838 "num_base_bdevs": 2, 00:24:35.838 "num_base_bdevs_discovered": 1, 00:24:35.838 "num_base_bdevs_operational": 1, 00:24:35.838 "base_bdevs_list": [ 00:24:35.838 { 00:24:35.838 
"name": null, 00:24:35.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:35.838 "is_configured": false, 00:24:35.838 "data_offset": 256, 00:24:35.838 "data_size": 7936 00:24:35.838 }, 00:24:35.838 { 00:24:35.838 "name": "pt2", 00:24:35.838 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:35.838 "is_configured": true, 00:24:35.838 "data_offset": 256, 00:24:35.838 "data_size": 7936 00:24:35.838 } 00:24:35.838 ] 00:24:35.838 }' 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:35.838 13:19:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:36.405 13:19:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:36.405 13:19:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.405 13:19:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:36.405 [2024-12-06 13:19:23.281157] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:36.405 [2024-12-06 13:19:23.281230] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:36.405 [2024-12-06 13:19:23.281337] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:36.405 [2024-12-06 13:19:23.281415] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:36.405 [2024-12-06 13:19:23.281431] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:24:36.405 13:19:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.405 13:19:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:36.405 13:19:23 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:24:36.405 13:19:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.405 13:19:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:36.405 13:19:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.405 13:19:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:24:36.405 13:19:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:24:36.405 13:19:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:24:36.405 13:19:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:36.405 13:19:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.405 13:19:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:36.405 [2024-12-06 13:19:23.345183] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:36.405 [2024-12-06 13:19:23.345515] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:36.405 [2024-12-06 13:19:23.345686] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:24:36.405 [2024-12-06 13:19:23.345821] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:36.405 [2024-12-06 13:19:23.349026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:36.405 [2024-12-06 13:19:23.349248] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:36.405 [2024-12-06 13:19:23.349446] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:24:36.405 [2024-12-06 13:19:23.349655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:36.405 [2024-12-06 13:19:23.350037] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:24:36.405 [2024-12-06 13:19:23.350222] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:36.405 pt1 00:24:36.405 [2024-12-06 13:19:23.350344] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:24:36.405 [2024-12-06 13:19:23.350444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:36.406 [2024-12-06 13:19:23.350723] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:24:36.406 [2024-12-06 13:19:23.350756] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:36.406 [2024-12-06 13:19:23.350877] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:36.406 13:19:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.406 [2024-12-06 13:19:23.351067] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:24:36.406 [2024-12-06 13:19:23.351089] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:24:36.406 13:19:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:24:36.406 [2024-12-06 13:19:23.351232] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:36.406 13:19:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:36.406 13:19:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:24:36.406 13:19:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:36.406 13:19:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:36.406 13:19:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:36.406 13:19:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:36.406 13:19:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:36.406 13:19:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:36.406 13:19:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:36.406 13:19:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:36.406 13:19:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:36.406 13:19:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.406 13:19:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:36.406 13:19:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:36.406 13:19:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.406 13:19:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:36.406 "name": "raid_bdev1", 00:24:36.406 "uuid": "069525d3-b2b2-49b0-8df6-704262c4442d", 00:24:36.406 "strip_size_kb": 0, 00:24:36.406 "state": "online", 00:24:36.406 "raid_level": "raid1", 00:24:36.406 "superblock": true, 00:24:36.406 "num_base_bdevs": 2, 00:24:36.406 "num_base_bdevs_discovered": 1, 00:24:36.406 
"num_base_bdevs_operational": 1, 00:24:36.406 "base_bdevs_list": [ 00:24:36.406 { 00:24:36.406 "name": null, 00:24:36.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:36.406 "is_configured": false, 00:24:36.406 "data_offset": 256, 00:24:36.406 "data_size": 7936 00:24:36.406 }, 00:24:36.406 { 00:24:36.406 "name": "pt2", 00:24:36.406 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:36.406 "is_configured": true, 00:24:36.406 "data_offset": 256, 00:24:36.406 "data_size": 7936 00:24:36.406 } 00:24:36.406 ] 00:24:36.406 }' 00:24:36.406 13:19:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:36.406 13:19:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:36.973 13:19:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:24:36.973 13:19:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:24:36.973 13:19:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.973 13:19:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:36.973 13:19:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.973 13:19:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:24:36.973 13:19:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:24:36.973 13:19:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:36.973 13:19:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.973 13:19:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:36.973 [2024-12-06 
13:19:23.974218] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:37.231 13:19:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.231 13:19:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 069525d3-b2b2-49b0-8df6-704262c4442d '!=' 069525d3-b2b2-49b0-8df6-704262c4442d ']' 00:24:37.231 13:19:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 88292 00:24:37.231 13:19:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88292 ']' 00:24:37.231 13:19:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 88292 00:24:37.231 13:19:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:24:37.231 13:19:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:37.231 13:19:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88292 00:24:37.231 killing process with pid 88292 00:24:37.231 13:19:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:37.231 13:19:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:37.231 13:19:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88292' 00:24:37.231 13:19:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 88292 00:24:37.231 [2024-12-06 13:19:24.044666] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:37.231 13:19:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 88292 00:24:37.231 [2024-12-06 13:19:24.044793] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:24:37.231 [2024-12-06 13:19:24.044896] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:37.231 [2024-12-06 13:19:24.044923] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:24:37.231 [2024-12-06 13:19:24.233568] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:38.604 ************************************ 00:24:38.604 END TEST raid_superblock_test_md_separate 00:24:38.604 ************************************ 00:24:38.604 13:19:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:24:38.604 00:24:38.604 real 0m6.714s 00:24:38.604 user 0m10.559s 00:24:38.604 sys 0m1.045s 00:24:38.604 13:19:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:38.604 13:19:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:38.604 13:19:25 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:24:38.604 13:19:25 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:24:38.604 13:19:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:24:38.604 13:19:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:38.604 13:19:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:38.604 ************************************ 00:24:38.604 START TEST raid_rebuild_test_sb_md_separate 00:24:38.604 ************************************ 00:24:38.604 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:24:38.604 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:24:38.604 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:24:38.604 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:24:38.604 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:24:38.604 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:24:38.604 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:24:38.604 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:38.604 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:24:38.604 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:38.604 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:38.604 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:24:38.604 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:38.604 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:38.604 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:38.604 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:24:38.604 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:24:38.604 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:24:38.604 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:24:38.604 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:24:38.604 
13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:24:38.604 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:24:38.604 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:24:38.604 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:24:38.604 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:24:38.604 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88620 00:24:38.604 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88620 00:24:38.604 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:38.604 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88620 ']' 00:24:38.604 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.604 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:38.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.604 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:38.604 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:38.604 13:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:38.604 [2024-12-06 13:19:25.480135] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:24:38.604 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:38.604 Zero copy mechanism will not be used. 00:24:38.604 [2024-12-06 13:19:25.480308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88620 ] 00:24:38.862 [2024-12-06 13:19:25.656303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.862 [2024-12-06 13:19:25.804464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.120 [2024-12-06 13:19:26.019993] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:39.120 [2024-12-06 13:19:26.020120] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:39.768 BaseBdev1_malloc 
00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:39.768 [2024-12-06 13:19:26.525687] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:39.768 [2024-12-06 13:19:26.525794] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:39.768 [2024-12-06 13:19:26.525828] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:39.768 [2024-12-06 13:19:26.525847] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:39.768 [2024-12-06 13:19:26.528611] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:39.768 [2024-12-06 13:19:26.528664] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:39.768 BaseBdev1 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:39.768 BaseBdev2_malloc 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:39.768 [2024-12-06 13:19:26.577366] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:39.768 [2024-12-06 13:19:26.577760] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:39.768 [2024-12-06 13:19:26.577804] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:39.768 [2024-12-06 13:19:26.577825] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:39.768 [2024-12-06 13:19:26.580756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:39.768 [2024-12-06 13:19:26.580968] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:39.768 BaseBdev2 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:39.768 spare_malloc 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:39.768 spare_delay 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:39.768 [2024-12-06 13:19:26.648281] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:39.768 [2024-12-06 13:19:26.648408] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:39.768 [2024-12-06 13:19:26.648441] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:39.768 [2024-12-06 13:19:26.648459] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:39.768 [2024-12-06 13:19:26.651293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:39.768 [2024-12-06 13:19:26.651346] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:39.768 spare 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:24:39.768 [2024-12-06 13:19:26.656359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:39.768 [2024-12-06 13:19:26.659083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:39.768 [2024-12-06 13:19:26.659359] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:39.768 [2024-12-06 13:19:26.659393] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:39.768 [2024-12-06 13:19:26.659551] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:24:39.768 [2024-12-06 13:19:26.659747] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:39.768 [2024-12-06 13:19:26.659773] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:39.768 [2024-12-06 13:19:26.659923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:39.768 13:19:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.768 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:39.768 "name": "raid_bdev1", 00:24:39.768 "uuid": "5adadd3d-f345-4c1f-bcf1-b1aeb0d9c50b", 00:24:39.768 "strip_size_kb": 0, 00:24:39.768 "state": "online", 00:24:39.768 "raid_level": "raid1", 00:24:39.768 "superblock": true, 00:24:39.768 "num_base_bdevs": 2, 00:24:39.768 "num_base_bdevs_discovered": 2, 00:24:39.768 "num_base_bdevs_operational": 2, 00:24:39.768 "base_bdevs_list": [ 00:24:39.768 { 00:24:39.768 "name": "BaseBdev1", 00:24:39.768 "uuid": "fc4301df-f1d6-5ac2-9e2f-e83489df8d3e", 00:24:39.768 "is_configured": true, 00:24:39.768 "data_offset": 256, 00:24:39.768 "data_size": 7936 00:24:39.768 }, 00:24:39.768 { 00:24:39.768 "name": "BaseBdev2", 00:24:39.768 "uuid": "d817701c-e4ff-5da0-8a5f-4c92375a48cb", 00:24:39.768 "is_configured": true, 00:24:39.768 "data_offset": 256, 00:24:39.769 "data_size": 7936 
00:24:39.769 } 00:24:39.769 ] 00:24:39.769 }' 00:24:39.769 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:39.769 13:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:40.337 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:40.337 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:24:40.337 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.337 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:40.337 [2024-12-06 13:19:27.156976] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:40.337 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.337 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:24:40.337 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:40.337 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.337 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:40.337 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:40.337 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.337 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:24:40.337 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:24:40.337 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:24:40.337 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:24:40.337 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:24:40.337 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:40.337 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:40.337 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:40.337 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:40.337 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:40.337 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:24:40.337 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:40.337 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:40.337 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:40.596 [2024-12-06 13:19:27.548793] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:40.597 /dev/nbd0 00:24:40.597 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:40.597 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:40.597 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:40.597 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # local i 00:24:40.597 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:40.597 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:40.597 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:40.597 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:24:40.597 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:40.597 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:40.597 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:40.597 1+0 records in 00:24:40.597 1+0 records out 00:24:40.597 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394936 s, 10.4 MB/s 00:24:40.597 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:40.597 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:24:40.597 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:40.597 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:40.597 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:24:40.597 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:40.597 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:40.597 13:19:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:24:40.597 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:24:40.597 13:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:24:41.529 7936+0 records in 00:24:41.529 7936+0 records out 00:24:41.529 32505856 bytes (33 MB, 31 MiB) copied, 0.933369 s, 34.8 MB/s 00:24:41.786 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:24:41.787 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:41.787 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:41.787 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:41.787 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:24:41.787 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:41.787 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:42.044 [2024-12-06 13:19:28.866253] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:42.044 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:42.044 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:42.044 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:42.044 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:42.044 13:19:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:42.044 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:42.044 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:24:42.044 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:24:42.044 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:24:42.044 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.044 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:42.044 [2024-12-06 13:19:28.886642] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:42.044 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.044 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:42.044 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:42.044 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:42.044 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:42.044 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:42.044 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:42.044 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:42.044 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:24:42.044 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:42.044 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:42.044 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:42.044 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:42.044 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.044 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:42.044 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.044 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:42.044 "name": "raid_bdev1", 00:24:42.044 "uuid": "5adadd3d-f345-4c1f-bcf1-b1aeb0d9c50b", 00:24:42.044 "strip_size_kb": 0, 00:24:42.044 "state": "online", 00:24:42.044 "raid_level": "raid1", 00:24:42.044 "superblock": true, 00:24:42.044 "num_base_bdevs": 2, 00:24:42.044 "num_base_bdevs_discovered": 1, 00:24:42.044 "num_base_bdevs_operational": 1, 00:24:42.044 "base_bdevs_list": [ 00:24:42.044 { 00:24:42.044 "name": null, 00:24:42.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:42.044 "is_configured": false, 00:24:42.044 "data_offset": 0, 00:24:42.044 "data_size": 7936 00:24:42.044 }, 00:24:42.044 { 00:24:42.044 "name": "BaseBdev2", 00:24:42.044 "uuid": "d817701c-e4ff-5da0-8a5f-4c92375a48cb", 00:24:42.044 "is_configured": true, 00:24:42.044 "data_offset": 256, 00:24:42.044 "data_size": 7936 00:24:42.044 } 00:24:42.044 ] 00:24:42.044 }' 00:24:42.044 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:42.044 13:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:24:42.609 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:42.609 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.609 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:42.609 [2024-12-06 13:19:29.386808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:42.609 [2024-12-06 13:19:29.401379] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:24:42.609 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.609 13:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:24:42.609 [2024-12-06 13:19:29.404184] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:43.548 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:43.548 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:43.548 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:43.548 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:43.548 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:43.548 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:43.548 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.548 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:24:43.548 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:43.548 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.548 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:43.548 "name": "raid_bdev1", 00:24:43.548 "uuid": "5adadd3d-f345-4c1f-bcf1-b1aeb0d9c50b", 00:24:43.548 "strip_size_kb": 0, 00:24:43.548 "state": "online", 00:24:43.548 "raid_level": "raid1", 00:24:43.548 "superblock": true, 00:24:43.548 "num_base_bdevs": 2, 00:24:43.548 "num_base_bdevs_discovered": 2, 00:24:43.548 "num_base_bdevs_operational": 2, 00:24:43.548 "process": { 00:24:43.548 "type": "rebuild", 00:24:43.548 "target": "spare", 00:24:43.548 "progress": { 00:24:43.548 "blocks": 2560, 00:24:43.548 "percent": 32 00:24:43.548 } 00:24:43.548 }, 00:24:43.548 "base_bdevs_list": [ 00:24:43.548 { 00:24:43.548 "name": "spare", 00:24:43.548 "uuid": "6992be94-e490-5d83-9f86-7f252f023a49", 00:24:43.548 "is_configured": true, 00:24:43.548 "data_offset": 256, 00:24:43.548 "data_size": 7936 00:24:43.548 }, 00:24:43.548 { 00:24:43.548 "name": "BaseBdev2", 00:24:43.548 "uuid": "d817701c-e4ff-5da0-8a5f-4c92375a48cb", 00:24:43.548 "is_configured": true, 00:24:43.548 "data_offset": 256, 00:24:43.548 "data_size": 7936 00:24:43.548 } 00:24:43.548 ] 00:24:43.548 }' 00:24:43.548 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:43.548 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:43.548 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:43.808 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:43.808 13:19:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:43.808 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.808 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:43.808 [2024-12-06 13:19:30.569673] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:43.808 [2024-12-06 13:19:30.616430] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:43.808 [2024-12-06 13:19:30.616592] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:43.808 [2024-12-06 13:19:30.616618] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:43.808 [2024-12-06 13:19:30.616638] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:43.808 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.808 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:43.808 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:43.808 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:43.808 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:43.808 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:43.808 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:43.808 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:43.808 13:19:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:43.808 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:43.808 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:43.808 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:43.808 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:43.808 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.808 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:43.808 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.808 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:43.808 "name": "raid_bdev1", 00:24:43.808 "uuid": "5adadd3d-f345-4c1f-bcf1-b1aeb0d9c50b", 00:24:43.808 "strip_size_kb": 0, 00:24:43.808 "state": "online", 00:24:43.808 "raid_level": "raid1", 00:24:43.808 "superblock": true, 00:24:43.808 "num_base_bdevs": 2, 00:24:43.808 "num_base_bdevs_discovered": 1, 00:24:43.808 "num_base_bdevs_operational": 1, 00:24:43.808 "base_bdevs_list": [ 00:24:43.808 { 00:24:43.808 "name": null, 00:24:43.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:43.808 "is_configured": false, 00:24:43.808 "data_offset": 0, 00:24:43.808 "data_size": 7936 00:24:43.808 }, 00:24:43.808 { 00:24:43.808 "name": "BaseBdev2", 00:24:43.808 "uuid": "d817701c-e4ff-5da0-8a5f-4c92375a48cb", 00:24:43.808 "is_configured": true, 00:24:43.808 "data_offset": 256, 00:24:43.808 "data_size": 7936 00:24:43.808 } 00:24:43.808 ] 00:24:43.808 }' 00:24:43.808 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:43.808 13:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:44.377 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:44.377 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:44.377 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:44.377 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:44.377 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:44.377 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:44.377 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:44.377 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.377 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:44.377 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.377 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:44.377 "name": "raid_bdev1", 00:24:44.377 "uuid": "5adadd3d-f345-4c1f-bcf1-b1aeb0d9c50b", 00:24:44.377 "strip_size_kb": 0, 00:24:44.377 "state": "online", 00:24:44.377 "raid_level": "raid1", 00:24:44.377 "superblock": true, 00:24:44.377 "num_base_bdevs": 2, 00:24:44.377 "num_base_bdevs_discovered": 1, 00:24:44.377 "num_base_bdevs_operational": 1, 00:24:44.377 "base_bdevs_list": [ 00:24:44.377 { 00:24:44.377 "name": null, 00:24:44.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:44.377 
"is_configured": false, 00:24:44.377 "data_offset": 0, 00:24:44.377 "data_size": 7936 00:24:44.377 }, 00:24:44.377 { 00:24:44.377 "name": "BaseBdev2", 00:24:44.377 "uuid": "d817701c-e4ff-5da0-8a5f-4c92375a48cb", 00:24:44.377 "is_configured": true, 00:24:44.377 "data_offset": 256, 00:24:44.377 "data_size": 7936 00:24:44.377 } 00:24:44.377 ] 00:24:44.377 }' 00:24:44.377 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:44.377 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:44.377 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:44.377 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:44.377 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:44.377 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.377 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:44.377 [2024-12-06 13:19:31.340306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:44.377 [2024-12-06 13:19:31.354125] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:24:44.377 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.377 13:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:24:44.377 [2024-12-06 13:19:31.356970] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:45.757 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:45.757 13:19:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:45.757 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:45.757 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:45.757 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:45.757 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:45.757 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.757 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.757 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:45.757 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.757 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:45.757 "name": "raid_bdev1", 00:24:45.757 "uuid": "5adadd3d-f345-4c1f-bcf1-b1aeb0d9c50b", 00:24:45.757 "strip_size_kb": 0, 00:24:45.757 "state": "online", 00:24:45.757 "raid_level": "raid1", 00:24:45.757 "superblock": true, 00:24:45.757 "num_base_bdevs": 2, 00:24:45.757 "num_base_bdevs_discovered": 2, 00:24:45.757 "num_base_bdevs_operational": 2, 00:24:45.757 "process": { 00:24:45.757 "type": "rebuild", 00:24:45.757 "target": "spare", 00:24:45.757 "progress": { 00:24:45.757 "blocks": 2560, 00:24:45.757 "percent": 32 00:24:45.757 } 00:24:45.757 }, 00:24:45.757 "base_bdevs_list": [ 00:24:45.757 { 00:24:45.757 "name": "spare", 00:24:45.757 "uuid": "6992be94-e490-5d83-9f86-7f252f023a49", 00:24:45.757 "is_configured": true, 00:24:45.757 "data_offset": 256, 00:24:45.757 "data_size": 7936 00:24:45.757 }, 
00:24:45.757 { 00:24:45.757 "name": "BaseBdev2", 00:24:45.757 "uuid": "d817701c-e4ff-5da0-8a5f-4c92375a48cb", 00:24:45.757 "is_configured": true, 00:24:45.757 "data_offset": 256, 00:24:45.757 "data_size": 7936 00:24:45.757 } 00:24:45.757 ] 00:24:45.757 }' 00:24:45.757 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:45.757 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:45.757 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:45.757 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:45.757 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:24:45.757 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:24:45.757 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:24:45.757 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:24:45.757 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:24:45.757 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:24:45.757 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=783 00:24:45.757 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:45.757 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:45.757 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:45.757 13:19:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:45.757 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:45.757 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:45.757 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:45.757 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.757 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:45.757 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.757 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.757 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:45.757 "name": "raid_bdev1", 00:24:45.757 "uuid": "5adadd3d-f345-4c1f-bcf1-b1aeb0d9c50b", 00:24:45.757 "strip_size_kb": 0, 00:24:45.757 "state": "online", 00:24:45.757 "raid_level": "raid1", 00:24:45.757 "superblock": true, 00:24:45.757 "num_base_bdevs": 2, 00:24:45.757 "num_base_bdevs_discovered": 2, 00:24:45.757 "num_base_bdevs_operational": 2, 00:24:45.757 "process": { 00:24:45.757 "type": "rebuild", 00:24:45.757 "target": "spare", 00:24:45.757 "progress": { 00:24:45.757 "blocks": 2816, 00:24:45.757 "percent": 35 00:24:45.757 } 00:24:45.757 }, 00:24:45.757 "base_bdevs_list": [ 00:24:45.757 { 00:24:45.757 "name": "spare", 00:24:45.757 "uuid": "6992be94-e490-5d83-9f86-7f252f023a49", 00:24:45.758 "is_configured": true, 00:24:45.758 "data_offset": 256, 00:24:45.758 "data_size": 7936 00:24:45.758 }, 00:24:45.758 { 00:24:45.758 "name": "BaseBdev2", 00:24:45.758 "uuid": "d817701c-e4ff-5da0-8a5f-4c92375a48cb", 00:24:45.758 
"is_configured": true, 00:24:45.758 "data_offset": 256, 00:24:45.758 "data_size": 7936 00:24:45.758 } 00:24:45.758 ] 00:24:45.758 }' 00:24:45.758 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:45.758 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:45.758 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:45.758 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:45.758 13:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:46.694 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:46.695 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:46.695 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:46.695 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:46.695 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:46.695 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:46.695 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:46.695 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:46.695 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.695 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:46.695 13:19:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.953 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:46.953 "name": "raid_bdev1", 00:24:46.953 "uuid": "5adadd3d-f345-4c1f-bcf1-b1aeb0d9c50b", 00:24:46.953 "strip_size_kb": 0, 00:24:46.953 "state": "online", 00:24:46.953 "raid_level": "raid1", 00:24:46.953 "superblock": true, 00:24:46.953 "num_base_bdevs": 2, 00:24:46.953 "num_base_bdevs_discovered": 2, 00:24:46.953 "num_base_bdevs_operational": 2, 00:24:46.953 "process": { 00:24:46.953 "type": "rebuild", 00:24:46.953 "target": "spare", 00:24:46.953 "progress": { 00:24:46.953 "blocks": 5888, 00:24:46.953 "percent": 74 00:24:46.953 } 00:24:46.953 }, 00:24:46.953 "base_bdevs_list": [ 00:24:46.953 { 00:24:46.953 "name": "spare", 00:24:46.953 "uuid": "6992be94-e490-5d83-9f86-7f252f023a49", 00:24:46.953 "is_configured": true, 00:24:46.953 "data_offset": 256, 00:24:46.953 "data_size": 7936 00:24:46.953 }, 00:24:46.953 { 00:24:46.953 "name": "BaseBdev2", 00:24:46.953 "uuid": "d817701c-e4ff-5da0-8a5f-4c92375a48cb", 00:24:46.953 "is_configured": true, 00:24:46.953 "data_offset": 256, 00:24:46.953 "data_size": 7936 00:24:46.953 } 00:24:46.953 ] 00:24:46.953 }' 00:24:46.953 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:46.953 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:46.953 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:46.953 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:46.953 13:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:47.522 [2024-12-06 13:19:34.487667] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:24:47.522 [2024-12-06 13:19:34.487790] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:47.522 [2024-12-06 13:19:34.488001] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:48.089 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:48.090 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:48.090 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:48.090 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:48.090 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:48.090 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:48.090 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:48.090 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.090 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:48.090 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:48.090 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.090 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:48.090 "name": "raid_bdev1", 00:24:48.090 "uuid": "5adadd3d-f345-4c1f-bcf1-b1aeb0d9c50b", 00:24:48.090 "strip_size_kb": 0, 00:24:48.090 "state": "online", 00:24:48.090 "raid_level": "raid1", 00:24:48.090 "superblock": true, 00:24:48.090 
"num_base_bdevs": 2, 00:24:48.090 "num_base_bdevs_discovered": 2, 00:24:48.090 "num_base_bdevs_operational": 2, 00:24:48.090 "base_bdevs_list": [ 00:24:48.090 { 00:24:48.090 "name": "spare", 00:24:48.090 "uuid": "6992be94-e490-5d83-9f86-7f252f023a49", 00:24:48.090 "is_configured": true, 00:24:48.090 "data_offset": 256, 00:24:48.090 "data_size": 7936 00:24:48.090 }, 00:24:48.090 { 00:24:48.090 "name": "BaseBdev2", 00:24:48.090 "uuid": "d817701c-e4ff-5da0-8a5f-4c92375a48cb", 00:24:48.090 "is_configured": true, 00:24:48.090 "data_offset": 256, 00:24:48.090 "data_size": 7936 00:24:48.090 } 00:24:48.090 ] 00:24:48.090 }' 00:24:48.090 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:48.090 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:48.090 13:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:48.090 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:24:48.090 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:24:48.090 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:48.090 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:48.090 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:48.090 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:48.090 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:48.090 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:48.090 13:19:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:48.090 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.090 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:48.090 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.090 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:48.090 "name": "raid_bdev1", 00:24:48.090 "uuid": "5adadd3d-f345-4c1f-bcf1-b1aeb0d9c50b", 00:24:48.090 "strip_size_kb": 0, 00:24:48.090 "state": "online", 00:24:48.090 "raid_level": "raid1", 00:24:48.090 "superblock": true, 00:24:48.090 "num_base_bdevs": 2, 00:24:48.090 "num_base_bdevs_discovered": 2, 00:24:48.090 "num_base_bdevs_operational": 2, 00:24:48.090 "base_bdevs_list": [ 00:24:48.090 { 00:24:48.090 "name": "spare", 00:24:48.090 "uuid": "6992be94-e490-5d83-9f86-7f252f023a49", 00:24:48.090 "is_configured": true, 00:24:48.090 "data_offset": 256, 00:24:48.090 "data_size": 7936 00:24:48.090 }, 00:24:48.090 { 00:24:48.090 "name": "BaseBdev2", 00:24:48.090 "uuid": "d817701c-e4ff-5da0-8a5f-4c92375a48cb", 00:24:48.090 "is_configured": true, 00:24:48.090 "data_offset": 256, 00:24:48.090 "data_size": 7936 00:24:48.090 } 00:24:48.090 ] 00:24:48.090 }' 00:24:48.090 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:48.349 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:48.349 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:48.349 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:48.349 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:48.349 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:48.349 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:48.349 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:48.349 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:48.349 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:48.349 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:48.349 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:48.349 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:48.349 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:48.349 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:48.349 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.349 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:48.349 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:48.349 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.349 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:48.349 "name": "raid_bdev1", 00:24:48.349 "uuid": "5adadd3d-f345-4c1f-bcf1-b1aeb0d9c50b", 00:24:48.349 
"strip_size_kb": 0, 00:24:48.349 "state": "online", 00:24:48.349 "raid_level": "raid1", 00:24:48.349 "superblock": true, 00:24:48.349 "num_base_bdevs": 2, 00:24:48.349 "num_base_bdevs_discovered": 2, 00:24:48.349 "num_base_bdevs_operational": 2, 00:24:48.349 "base_bdevs_list": [ 00:24:48.349 { 00:24:48.349 "name": "spare", 00:24:48.349 "uuid": "6992be94-e490-5d83-9f86-7f252f023a49", 00:24:48.349 "is_configured": true, 00:24:48.349 "data_offset": 256, 00:24:48.349 "data_size": 7936 00:24:48.349 }, 00:24:48.349 { 00:24:48.349 "name": "BaseBdev2", 00:24:48.349 "uuid": "d817701c-e4ff-5da0-8a5f-4c92375a48cb", 00:24:48.349 "is_configured": true, 00:24:48.349 "data_offset": 256, 00:24:48.349 "data_size": 7936 00:24:48.349 } 00:24:48.349 ] 00:24:48.349 }' 00:24:48.349 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:48.349 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:48.916 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:48.916 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.916 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:48.916 [2024-12-06 13:19:35.683024] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:48.916 [2024-12-06 13:19:35.683123] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:48.916 [2024-12-06 13:19:35.683252] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:48.916 [2024-12-06 13:19:35.683380] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:48.916 [2024-12-06 13:19:35.683405] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:24:48.916 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.916 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:48.916 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:24:48.916 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.916 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:48.916 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.916 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:24:48.916 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:48.916 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:24:48.916 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:48.916 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:48.916 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:48.916 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:48.916 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:48.916 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:48.916 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:24:48.916 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:48.916 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:48.916 13:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:49.174 /dev/nbd0 00:24:49.174 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:49.174 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:49.174 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:49.174 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:24:49.174 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:49.174 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:49.174 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:49.174 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:24:49.174 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:49.174 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:49.174 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:49.174 1+0 records in 00:24:49.174 1+0 records out 00:24:49.174 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278074 s, 14.7 MB/s 00:24:49.174 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:49.174 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:24:49.174 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:49.174 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:49.174 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:24:49.174 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:49.174 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:49.174 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:24:49.431 /dev/nbd1 00:24:49.431 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:49.431 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:49.431 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:24:49.431 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:24:49.431 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:49.431 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:49.431 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:24:49.431 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:24:49.431 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:49.431 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:49.431 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:49.431 1+0 records in 00:24:49.431 1+0 records out 00:24:49.431 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000475957 s, 8.6 MB/s 00:24:49.431 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:49.431 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:24:49.431 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:49.431 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:49.431 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:24:49.431 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:49.431 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:49.431 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:49.688 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:24:49.688 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:49.688 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:49.688 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:24:49.688 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:24:49.688 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:49.688 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:49.946 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:49.946 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:49.946 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:49.946 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:49.946 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:49.946 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:49.946 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:24:49.946 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:24:49.946 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:49.946 13:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:24:50.204 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:50.204 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:50.204 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:24:50.204 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:50.204 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:50.204 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:50.204 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:24:50.204 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:24:50.204 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:24:50.204 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:24:50.204 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.204 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:50.204 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.204 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:50.204 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.204 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:50.204 [2024-12-06 13:19:37.193847] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:50.204 [2024-12-06 13:19:37.193960] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:50.204 [2024-12-06 13:19:37.193999] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:24:50.204 [2024-12-06 13:19:37.194014] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:24:50.204 [2024-12-06 13:19:37.196954] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:50.204 [2024-12-06 13:19:37.196998] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:50.204 [2024-12-06 13:19:37.197099] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:50.204 [2024-12-06 13:19:37.197174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:50.204 [2024-12-06 13:19:37.197353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:50.204 spare 00:24:50.204 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.204 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:24:50.204 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.204 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:50.462 [2024-12-06 13:19:37.297537] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:50.462 [2024-12-06 13:19:37.297637] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:50.462 [2024-12-06 13:19:37.297844] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:24:50.462 [2024-12-06 13:19:37.298120] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:50.462 [2024-12-06 13:19:37.298145] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:24:50.462 [2024-12-06 13:19:37.298363] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:50.462 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:50.462 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:50.462 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:50.462 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:50.462 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:50.462 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:50.462 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:50.462 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:50.462 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:50.462 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:50.462 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:50.462 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:50.462 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.462 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:50.462 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:50.462 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.462 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:50.462 "name": "raid_bdev1", 00:24:50.462 "uuid": 
"5adadd3d-f345-4c1f-bcf1-b1aeb0d9c50b", 00:24:50.462 "strip_size_kb": 0, 00:24:50.462 "state": "online", 00:24:50.462 "raid_level": "raid1", 00:24:50.462 "superblock": true, 00:24:50.462 "num_base_bdevs": 2, 00:24:50.462 "num_base_bdevs_discovered": 2, 00:24:50.462 "num_base_bdevs_operational": 2, 00:24:50.462 "base_bdevs_list": [ 00:24:50.462 { 00:24:50.462 "name": "spare", 00:24:50.462 "uuid": "6992be94-e490-5d83-9f86-7f252f023a49", 00:24:50.462 "is_configured": true, 00:24:50.462 "data_offset": 256, 00:24:50.462 "data_size": 7936 00:24:50.462 }, 00:24:50.462 { 00:24:50.462 "name": "BaseBdev2", 00:24:50.462 "uuid": "d817701c-e4ff-5da0-8a5f-4c92375a48cb", 00:24:50.462 "is_configured": true, 00:24:50.462 "data_offset": 256, 00:24:50.462 "data_size": 7936 00:24:50.462 } 00:24:50.462 ] 00:24:50.462 }' 00:24:50.462 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:50.462 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:51.028 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:51.028 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:51.028 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:51.028 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:51.028 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:51.028 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:51.028 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.028 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:24:51.028 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.028 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.028 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:51.028 "name": "raid_bdev1", 00:24:51.028 "uuid": "5adadd3d-f345-4c1f-bcf1-b1aeb0d9c50b", 00:24:51.028 "strip_size_kb": 0, 00:24:51.028 "state": "online", 00:24:51.028 "raid_level": "raid1", 00:24:51.028 "superblock": true, 00:24:51.028 "num_base_bdevs": 2, 00:24:51.028 "num_base_bdevs_discovered": 2, 00:24:51.028 "num_base_bdevs_operational": 2, 00:24:51.028 "base_bdevs_list": [ 00:24:51.028 { 00:24:51.028 "name": "spare", 00:24:51.028 "uuid": "6992be94-e490-5d83-9f86-7f252f023a49", 00:24:51.028 "is_configured": true, 00:24:51.028 "data_offset": 256, 00:24:51.028 "data_size": 7936 00:24:51.028 }, 00:24:51.028 { 00:24:51.028 "name": "BaseBdev2", 00:24:51.028 "uuid": "d817701c-e4ff-5da0-8a5f-4c92375a48cb", 00:24:51.028 "is_configured": true, 00:24:51.028 "data_offset": 256, 00:24:51.028 "data_size": 7936 00:24:51.028 } 00:24:51.028 ] 00:24:51.028 }' 00:24:51.028 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:51.028 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:51.028 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:51.028 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:51.028 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:51.028 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.028 13:19:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:51.028 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:51.028 13:19:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.028 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:24:51.028 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:51.028 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.028 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:51.286 [2024-12-06 13:19:38.046684] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:51.286 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.286 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:51.286 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:51.286 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:51.286 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:51.286 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:51.286 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:51.286 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:51.286 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:51.286 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:51.286 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:51.286 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:51.286 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.286 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.286 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:51.286 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.286 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:51.286 "name": "raid_bdev1", 00:24:51.286 "uuid": "5adadd3d-f345-4c1f-bcf1-b1aeb0d9c50b", 00:24:51.286 "strip_size_kb": 0, 00:24:51.286 "state": "online", 00:24:51.286 "raid_level": "raid1", 00:24:51.286 "superblock": true, 00:24:51.286 "num_base_bdevs": 2, 00:24:51.286 "num_base_bdevs_discovered": 1, 00:24:51.286 "num_base_bdevs_operational": 1, 00:24:51.286 "base_bdevs_list": [ 00:24:51.286 { 00:24:51.286 "name": null, 00:24:51.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:51.286 "is_configured": false, 00:24:51.286 "data_offset": 0, 00:24:51.286 "data_size": 7936 00:24:51.286 }, 00:24:51.286 { 00:24:51.286 "name": "BaseBdev2", 00:24:51.286 "uuid": "d817701c-e4ff-5da0-8a5f-4c92375a48cb", 00:24:51.286 "is_configured": true, 00:24:51.286 "data_offset": 256, 00:24:51.286 "data_size": 7936 00:24:51.286 } 00:24:51.286 ] 00:24:51.286 }' 00:24:51.286 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:51.286 13:19:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:51.587 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:51.587 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.587 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:51.587 [2024-12-06 13:19:38.562921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:51.587 [2024-12-06 13:19:38.563617] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:51.587 [2024-12-06 13:19:38.563653] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:24:51.587 [2024-12-06 13:19:38.563741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:51.587 [2024-12-06 13:19:38.576764] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:24:51.587 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.587 13:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:24:51.587 [2024-12-06 13:19:38.579609] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:52.993 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:52.993 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:52.993 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:52.993 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:24:52.993 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:52.993 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:52.993 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.993 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:52.993 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:52.993 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.993 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:52.993 "name": "raid_bdev1", 00:24:52.993 "uuid": "5adadd3d-f345-4c1f-bcf1-b1aeb0d9c50b", 00:24:52.993 "strip_size_kb": 0, 00:24:52.993 "state": "online", 00:24:52.993 "raid_level": "raid1", 00:24:52.993 "superblock": true, 00:24:52.994 "num_base_bdevs": 2, 00:24:52.994 "num_base_bdevs_discovered": 2, 00:24:52.994 "num_base_bdevs_operational": 2, 00:24:52.994 "process": { 00:24:52.994 "type": "rebuild", 00:24:52.994 "target": "spare", 00:24:52.994 "progress": { 00:24:52.994 "blocks": 2560, 00:24:52.994 "percent": 32 00:24:52.994 } 00:24:52.994 }, 00:24:52.994 "base_bdevs_list": [ 00:24:52.994 { 00:24:52.994 "name": "spare", 00:24:52.994 "uuid": "6992be94-e490-5d83-9f86-7f252f023a49", 00:24:52.994 "is_configured": true, 00:24:52.994 "data_offset": 256, 00:24:52.994 "data_size": 7936 00:24:52.994 }, 00:24:52.994 { 00:24:52.994 "name": "BaseBdev2", 00:24:52.994 "uuid": "d817701c-e4ff-5da0-8a5f-4c92375a48cb", 00:24:52.994 "is_configured": true, 00:24:52.994 "data_offset": 256, 00:24:52.994 "data_size": 7936 00:24:52.994 } 00:24:52.994 ] 00:24:52.994 }' 00:24:52.994 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:52.994 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:52.994 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:52.994 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:52.994 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:24:52.994 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.994 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:52.994 [2024-12-06 13:19:39.750058] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:52.994 [2024-12-06 13:19:39.791735] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:52.994 [2024-12-06 13:19:39.792245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:52.994 [2024-12-06 13:19:39.792277] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:52.994 [2024-12-06 13:19:39.792317] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:52.994 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.994 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:52.994 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:52.994 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:52.994 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:52.994 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:52.994 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:52.994 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:52.994 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:52.994 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:52.994 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:52.994 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:52.994 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.994 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:52.994 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:52.994 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.994 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:52.994 "name": "raid_bdev1", 00:24:52.994 "uuid": "5adadd3d-f345-4c1f-bcf1-b1aeb0d9c50b", 00:24:52.994 "strip_size_kb": 0, 00:24:52.994 "state": "online", 00:24:52.994 "raid_level": "raid1", 00:24:52.994 "superblock": true, 00:24:52.994 "num_base_bdevs": 2, 00:24:52.994 "num_base_bdevs_discovered": 1, 00:24:52.994 "num_base_bdevs_operational": 1, 00:24:52.994 "base_bdevs_list": [ 00:24:52.994 { 00:24:52.994 "name": null, 00:24:52.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:52.994 
"is_configured": false, 00:24:52.994 "data_offset": 0, 00:24:52.994 "data_size": 7936 00:24:52.994 }, 00:24:52.994 { 00:24:52.994 "name": "BaseBdev2", 00:24:52.994 "uuid": "d817701c-e4ff-5da0-8a5f-4c92375a48cb", 00:24:52.994 "is_configured": true, 00:24:52.994 "data_offset": 256, 00:24:52.994 "data_size": 7936 00:24:52.994 } 00:24:52.994 ] 00:24:52.994 }' 00:24:52.994 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:52.994 13:19:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:53.559 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:53.559 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.559 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:53.559 [2024-12-06 13:19:40.307649] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:53.559 [2024-12-06 13:19:40.307751] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:53.559 [2024-12-06 13:19:40.307793] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:24:53.559 [2024-12-06 13:19:40.307828] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:53.559 [2024-12-06 13:19:40.308202] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:53.559 [2024-12-06 13:19:40.308233] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:53.559 [2024-12-06 13:19:40.308359] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:53.559 [2024-12-06 13:19:40.308386] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:24:53.559 [2024-12-06 13:19:40.308399] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:24:53.559 [2024-12-06 13:19:40.308434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:53.559 spare 00:24:53.559 [2024-12-06 13:19:40.320809] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:24:53.559 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.559 13:19:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:24:53.559 [2024-12-06 13:19:40.323270] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:54.491 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:54.491 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:54.491 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:54.491 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:54.491 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:54.491 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:54.491 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.491 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:54.491 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:54.491 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:54.491 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:54.491 "name": "raid_bdev1", 00:24:54.491 "uuid": "5adadd3d-f345-4c1f-bcf1-b1aeb0d9c50b", 00:24:54.491 "strip_size_kb": 0, 00:24:54.491 "state": "online", 00:24:54.491 "raid_level": "raid1", 00:24:54.491 "superblock": true, 00:24:54.491 "num_base_bdevs": 2, 00:24:54.491 "num_base_bdevs_discovered": 2, 00:24:54.491 "num_base_bdevs_operational": 2, 00:24:54.491 "process": { 00:24:54.491 "type": "rebuild", 00:24:54.491 "target": "spare", 00:24:54.491 "progress": { 00:24:54.491 "blocks": 2560, 00:24:54.491 "percent": 32 00:24:54.491 } 00:24:54.491 }, 00:24:54.491 "base_bdevs_list": [ 00:24:54.491 { 00:24:54.491 "name": "spare", 00:24:54.491 "uuid": "6992be94-e490-5d83-9f86-7f252f023a49", 00:24:54.491 "is_configured": true, 00:24:54.491 "data_offset": 256, 00:24:54.491 "data_size": 7936 00:24:54.491 }, 00:24:54.491 { 00:24:54.491 "name": "BaseBdev2", 00:24:54.491 "uuid": "d817701c-e4ff-5da0-8a5f-4c92375a48cb", 00:24:54.491 "is_configured": true, 00:24:54.491 "data_offset": 256, 00:24:54.491 "data_size": 7936 00:24:54.491 } 00:24:54.491 ] 00:24:54.491 }' 00:24:54.491 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:54.491 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:54.491 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:54.491 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:54.491 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:24:54.491 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.491 13:19:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:54.491 [2024-12-06 13:19:41.494629] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:54.749 [2024-12-06 13:19:41.535520] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:54.749 [2024-12-06 13:19:41.536027] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:54.749 [2024-12-06 13:19:41.536067] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:54.749 [2024-12-06 13:19:41.536081] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:54.749 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.749 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:54.749 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:54.749 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:54.749 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:54.749 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:54.749 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:54.749 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:54.749 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:54.749 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:54.749 13:19:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:54.749 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:54.749 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:54.749 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.749 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:54.749 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.749 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:54.749 "name": "raid_bdev1", 00:24:54.749 "uuid": "5adadd3d-f345-4c1f-bcf1-b1aeb0d9c50b", 00:24:54.749 "strip_size_kb": 0, 00:24:54.749 "state": "online", 00:24:54.749 "raid_level": "raid1", 00:24:54.749 "superblock": true, 00:24:54.749 "num_base_bdevs": 2, 00:24:54.749 "num_base_bdevs_discovered": 1, 00:24:54.749 "num_base_bdevs_operational": 1, 00:24:54.749 "base_bdevs_list": [ 00:24:54.749 { 00:24:54.749 "name": null, 00:24:54.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.749 "is_configured": false, 00:24:54.749 "data_offset": 0, 00:24:54.749 "data_size": 7936 00:24:54.749 }, 00:24:54.749 { 00:24:54.749 "name": "BaseBdev2", 00:24:54.749 "uuid": "d817701c-e4ff-5da0-8a5f-4c92375a48cb", 00:24:54.749 "is_configured": true, 00:24:54.749 "data_offset": 256, 00:24:54.749 "data_size": 7936 00:24:54.749 } 00:24:54.749 ] 00:24:54.749 }' 00:24:54.749 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:54.749 13:19:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:55.313 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:24:55.313 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:55.313 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:55.313 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:55.313 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:55.313 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:55.313 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:55.313 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.313 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:55.313 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.313 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:55.313 "name": "raid_bdev1", 00:24:55.313 "uuid": "5adadd3d-f345-4c1f-bcf1-b1aeb0d9c50b", 00:24:55.313 "strip_size_kb": 0, 00:24:55.313 "state": "online", 00:24:55.313 "raid_level": "raid1", 00:24:55.313 "superblock": true, 00:24:55.313 "num_base_bdevs": 2, 00:24:55.313 "num_base_bdevs_discovered": 1, 00:24:55.313 "num_base_bdevs_operational": 1, 00:24:55.313 "base_bdevs_list": [ 00:24:55.313 { 00:24:55.313 "name": null, 00:24:55.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:55.313 "is_configured": false, 00:24:55.313 "data_offset": 0, 00:24:55.313 "data_size": 7936 00:24:55.313 }, 00:24:55.313 { 00:24:55.313 "name": "BaseBdev2", 00:24:55.313 "uuid": "d817701c-e4ff-5da0-8a5f-4c92375a48cb", 00:24:55.313 "is_configured": true, 
00:24:55.313 "data_offset": 256, 00:24:55.313 "data_size": 7936 00:24:55.313 } 00:24:55.313 ] 00:24:55.313 }' 00:24:55.313 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:55.313 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:55.313 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:55.313 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:55.313 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:24:55.313 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.313 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:55.313 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.313 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:55.313 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.313 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:55.313 [2024-12-06 13:19:42.283944] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:55.313 [2024-12-06 13:19:42.284215] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:55.313 [2024-12-06 13:19:42.284297] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:24:55.313 [2024-12-06 13:19:42.284319] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:55.313 [2024-12-06 13:19:42.284673] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:55.313 [2024-12-06 13:19:42.284699] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:55.313 [2024-12-06 13:19:42.284781] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:55.313 [2024-12-06 13:19:42.284803] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:55.313 [2024-12-06 13:19:42.284818] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:55.313 [2024-12-06 13:19:42.284833] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:24:55.313 BaseBdev1 00:24:55.313 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.313 13:19:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:24:56.686 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:56.686 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:56.686 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:56.686 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:56.686 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:56.686 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:56.686 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:56.686 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:56.686 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:56.686 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:56.686 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:56.686 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:56.686 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.686 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:56.686 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.686 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:56.686 "name": "raid_bdev1", 00:24:56.686 "uuid": "5adadd3d-f345-4c1f-bcf1-b1aeb0d9c50b", 00:24:56.686 "strip_size_kb": 0, 00:24:56.686 "state": "online", 00:24:56.686 "raid_level": "raid1", 00:24:56.686 "superblock": true, 00:24:56.686 "num_base_bdevs": 2, 00:24:56.686 "num_base_bdevs_discovered": 1, 00:24:56.686 "num_base_bdevs_operational": 1, 00:24:56.686 "base_bdevs_list": [ 00:24:56.686 { 00:24:56.686 "name": null, 00:24:56.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.686 "is_configured": false, 00:24:56.686 "data_offset": 0, 00:24:56.686 "data_size": 7936 00:24:56.686 }, 00:24:56.686 { 00:24:56.686 "name": "BaseBdev2", 00:24:56.686 "uuid": "d817701c-e4ff-5da0-8a5f-4c92375a48cb", 00:24:56.686 "is_configured": true, 00:24:56.686 "data_offset": 256, 00:24:56.686 "data_size": 7936 00:24:56.686 } 00:24:56.686 ] 00:24:56.686 }' 00:24:56.686 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:56.686 13:19:43 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:56.945 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:56.945 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:56.945 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:56.945 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:56.945 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:56.945 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:56.945 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.945 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:56.945 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:56.945 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.945 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:56.945 "name": "raid_bdev1", 00:24:56.945 "uuid": "5adadd3d-f345-4c1f-bcf1-b1aeb0d9c50b", 00:24:56.945 "strip_size_kb": 0, 00:24:56.945 "state": "online", 00:24:56.946 "raid_level": "raid1", 00:24:56.946 "superblock": true, 00:24:56.946 "num_base_bdevs": 2, 00:24:56.946 "num_base_bdevs_discovered": 1, 00:24:56.946 "num_base_bdevs_operational": 1, 00:24:56.946 "base_bdevs_list": [ 00:24:56.946 { 00:24:56.946 "name": null, 00:24:56.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.946 "is_configured": false, 00:24:56.946 "data_offset": 0, 00:24:56.946 
"data_size": 7936 00:24:56.946 }, 00:24:56.946 { 00:24:56.946 "name": "BaseBdev2", 00:24:56.946 "uuid": "d817701c-e4ff-5da0-8a5f-4c92375a48cb", 00:24:56.946 "is_configured": true, 00:24:56.946 "data_offset": 256, 00:24:56.946 "data_size": 7936 00:24:56.946 } 00:24:56.946 ] 00:24:56.946 }' 00:24:56.946 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:56.946 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:56.946 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:57.205 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:57.205 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:57.205 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:24:57.205 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:57.205 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:57.205 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:57.205 13:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:57.205 13:19:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:57.205 13:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:57.205 13:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:24:57.205 13:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:57.205 [2024-12-06 13:19:44.008468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:57.205 request: 00:24:57.205 { 00:24:57.205 "base_bdev": "BaseBdev1", 00:24:57.205 "raid_bdev": "raid_bdev1", 00:24:57.205 [2024-12-06 13:19:44.009762] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:57.205 [2024-12-06 13:19:44.009849] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:57.205 "method": "bdev_raid_add_base_bdev", 00:24:57.205 "req_id": 1 00:24:57.205 } 00:24:57.205 Got JSON-RPC error response 00:24:57.205 response: 00:24:57.205 { 00:24:57.205 "code": -22, 00:24:57.205 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:24:57.205 } 00:24:57.205 13:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:57.225 13:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:24:57.225 13:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:57.225 13:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:57.225 13:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:57.225 13:19:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:24:58.162 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:58.162 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:58.162 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:58.162 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:58.162 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:58.162 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:58.162 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:58.162 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:58.162 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:58.162 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:58.162 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:58.162 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.162 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:58.162 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:58.162 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.162 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:58.162 "name": "raid_bdev1", 00:24:58.162 "uuid": "5adadd3d-f345-4c1f-bcf1-b1aeb0d9c50b", 00:24:58.162 "strip_size_kb": 0, 00:24:58.162 "state": "online", 00:24:58.162 "raid_level": "raid1", 00:24:58.162 "superblock": true, 00:24:58.162 "num_base_bdevs": 2, 00:24:58.162 "num_base_bdevs_discovered": 1, 00:24:58.162 "num_base_bdevs_operational": 1, 00:24:58.162 "base_bdevs_list": [ 
00:24:58.162 { 00:24:58.162 "name": null, 00:24:58.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.162 "is_configured": false, 00:24:58.163 "data_offset": 0, 00:24:58.163 "data_size": 7936 00:24:58.163 }, 00:24:58.163 { 00:24:58.163 "name": "BaseBdev2", 00:24:58.163 "uuid": "d817701c-e4ff-5da0-8a5f-4c92375a48cb", 00:24:58.163 "is_configured": true, 00:24:58.163 "data_offset": 256, 00:24:58.163 "data_size": 7936 00:24:58.163 } 00:24:58.163 ] 00:24:58.163 }' 00:24:58.163 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:58.163 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:58.729 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:58.729 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:58.729 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:58.729 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:58.729 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:58.729 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:58.729 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:58.729 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.729 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:58.729 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.729 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:58.729 "name": "raid_bdev1", 00:24:58.729 "uuid": "5adadd3d-f345-4c1f-bcf1-b1aeb0d9c50b", 00:24:58.729 "strip_size_kb": 0, 00:24:58.729 "state": "online", 00:24:58.729 "raid_level": "raid1", 00:24:58.729 "superblock": true, 00:24:58.729 "num_base_bdevs": 2, 00:24:58.729 "num_base_bdevs_discovered": 1, 00:24:58.729 "num_base_bdevs_operational": 1, 00:24:58.729 "base_bdevs_list": [ 00:24:58.729 { 00:24:58.729 "name": null, 00:24:58.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.729 "is_configured": false, 00:24:58.729 "data_offset": 0, 00:24:58.729 "data_size": 7936 00:24:58.729 }, 00:24:58.729 { 00:24:58.729 "name": "BaseBdev2", 00:24:58.729 "uuid": "d817701c-e4ff-5da0-8a5f-4c92375a48cb", 00:24:58.729 "is_configured": true, 00:24:58.729 "data_offset": 256, 00:24:58.729 "data_size": 7936 00:24:58.729 } 00:24:58.729 ] 00:24:58.729 }' 00:24:58.729 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:58.729 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:58.729 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:58.729 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:58.729 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88620 00:24:58.729 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88620 ']' 00:24:58.729 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88620 00:24:58.729 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:24:58.729 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:58.729 
13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88620 00:24:58.729 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:58.729 killing process with pid 88620 00:24:58.729 Received shutdown signal, test time was about 60.000000 seconds 00:24:58.729 00:24:58.729 Latency(us) 00:24:58.729 [2024-12-06T13:19:45.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:58.729 [2024-12-06T13:19:45.745Z] =================================================================================================================== 00:24:58.729 [2024-12-06T13:19:45.745Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:58.729 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:58.730 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88620' 00:24:58.730 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88620 00:24:58.730 [2024-12-06 13:19:45.740331] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:58.730 13:19:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88620 00:24:58.730 [2024-12-06 13:19:45.740539] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:58.730 [2024-12-06 13:19:45.740613] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:58.730 [2024-12-06 13:19:45.740633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:24:59.377 [2024-12-06 13:19:46.021318] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:00.314 13:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:25:00.314 00:25:00.315 real 0m21.699s 00:25:00.315 user 0m29.371s 00:25:00.315 sys 0m2.652s 00:25:00.315 13:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:00.315 ************************************ 00:25:00.315 END TEST raid_rebuild_test_sb_md_separate 00:25:00.315 ************************************ 00:25:00.315 13:19:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:25:00.315 13:19:47 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:25:00.315 13:19:47 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:25:00.315 13:19:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:00.315 13:19:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:00.315 13:19:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:00.315 ************************************ 00:25:00.315 START TEST raid_state_function_test_sb_md_interleaved 00:25:00.315 ************************************ 00:25:00.315 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:25:00.315 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:25:00.315 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:25:00.315 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:25:00.315 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:25:00.315 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:25:00.315 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:00.315 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:25:00.315 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:00.315 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:00.315 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:25:00.315 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:00.315 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:00.315 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:00.315 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:25:00.315 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:25:00.315 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:25:00.315 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:25:00.315 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:25:00.315 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:25:00.315 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:25:00.315 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:25:00.315 13:19:47 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:25:00.315 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=89329 00:25:00.315 Process raid pid: 89329 00:25:00.315 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 89329' 00:25:00.315 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:00.315 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 89329 00:25:00.315 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89329 ']' 00:25:00.315 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.315 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:00.315 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.315 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:00.315 13:19:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:00.315 [2024-12-06 13:19:47.244692] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:25:00.315 [2024-12-06 13:19:47.244884] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.573 [2024-12-06 13:19:47.421668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.573 [2024-12-06 13:19:47.555150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.831 [2024-12-06 13:19:47.765480] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:00.831 [2024-12-06 13:19:47.765546] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:01.399 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:01.399 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:25:01.399 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:25:01.399 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.399 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:01.399 [2024-12-06 13:19:48.252676] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:01.399 [2024-12-06 13:19:48.252769] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:01.399 [2024-12-06 13:19:48.252802] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:01.399 [2024-12-06 13:19:48.252827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:01.399 13:19:48 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.399 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:01.399 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:01.399 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:01.399 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:01.399 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:01.399 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:01.399 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:01.399 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:01.399 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:01.399 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:01.399 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:01.399 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.399 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:01.399 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:01.399 13:19:48 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.399 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:01.399 "name": "Existed_Raid", 00:25:01.399 "uuid": "00fa9a70-04f2-4163-b183-04ebd4a5f91d", 00:25:01.399 "strip_size_kb": 0, 00:25:01.399 "state": "configuring", 00:25:01.399 "raid_level": "raid1", 00:25:01.399 "superblock": true, 00:25:01.399 "num_base_bdevs": 2, 00:25:01.399 "num_base_bdevs_discovered": 0, 00:25:01.399 "num_base_bdevs_operational": 2, 00:25:01.399 "base_bdevs_list": [ 00:25:01.399 { 00:25:01.399 "name": "BaseBdev1", 00:25:01.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.399 "is_configured": false, 00:25:01.399 "data_offset": 0, 00:25:01.399 "data_size": 0 00:25:01.399 }, 00:25:01.399 { 00:25:01.399 "name": "BaseBdev2", 00:25:01.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.399 "is_configured": false, 00:25:01.399 "data_offset": 0, 00:25:01.399 "data_size": 0 00:25:01.399 } 00:25:01.399 ] 00:25:01.399 }' 00:25:01.399 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:01.399 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:01.965 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:01.965 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.965 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:01.965 [2024-12-06 13:19:48.748769] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:01.965 [2024-12-06 13:19:48.748814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:25:01.965 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.965 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:25:01.965 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.965 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:01.965 [2024-12-06 13:19:48.756757] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:01.965 [2024-12-06 13:19:48.756837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:01.965 [2024-12-06 13:19:48.756877] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:01.965 [2024-12-06 13:19:48.756896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:01.965 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.965 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:25:01.965 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.965 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:01.966 [2024-12-06 13:19:48.803066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:01.966 BaseBdev1 00:25:01.966 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.966 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:25:01.966 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:25:01.966 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:01.966 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:25:01.966 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:01.966 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:01.966 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:01.966 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.966 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:01.966 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.966 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:01.966 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.966 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:01.966 [ 00:25:01.966 { 00:25:01.966 "name": "BaseBdev1", 00:25:01.966 "aliases": [ 00:25:01.966 "2a254155-659e-4e5d-a8be-1be8f741c8c5" 00:25:01.966 ], 00:25:01.966 "product_name": "Malloc disk", 00:25:01.966 "block_size": 4128, 00:25:01.966 "num_blocks": 8192, 00:25:01.966 "uuid": "2a254155-659e-4e5d-a8be-1be8f741c8c5", 00:25:01.966 "md_size": 32, 00:25:01.966 
"md_interleave": true, 00:25:01.966 "dif_type": 0, 00:25:01.966 "assigned_rate_limits": { 00:25:01.966 "rw_ios_per_sec": 0, 00:25:01.966 "rw_mbytes_per_sec": 0, 00:25:01.966 "r_mbytes_per_sec": 0, 00:25:01.966 "w_mbytes_per_sec": 0 00:25:01.966 }, 00:25:01.966 "claimed": true, 00:25:01.966 "claim_type": "exclusive_write", 00:25:01.966 "zoned": false, 00:25:01.966 "supported_io_types": { 00:25:01.966 "read": true, 00:25:01.966 "write": true, 00:25:01.966 "unmap": true, 00:25:01.966 "flush": true, 00:25:01.966 "reset": true, 00:25:01.966 "nvme_admin": false, 00:25:01.966 "nvme_io": false, 00:25:01.966 "nvme_io_md": false, 00:25:01.966 "write_zeroes": true, 00:25:01.966 "zcopy": true, 00:25:01.966 "get_zone_info": false, 00:25:01.966 "zone_management": false, 00:25:01.966 "zone_append": false, 00:25:01.966 "compare": false, 00:25:01.966 "compare_and_write": false, 00:25:01.966 "abort": true, 00:25:01.966 "seek_hole": false, 00:25:01.966 "seek_data": false, 00:25:01.966 "copy": true, 00:25:01.966 "nvme_iov_md": false 00:25:01.966 }, 00:25:01.966 "memory_domains": [ 00:25:01.966 { 00:25:01.966 "dma_device_id": "system", 00:25:01.966 "dma_device_type": 1 00:25:01.966 }, 00:25:01.966 { 00:25:01.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:01.966 "dma_device_type": 2 00:25:01.966 } 00:25:01.966 ], 00:25:01.966 "driver_specific": {} 00:25:01.966 } 00:25:01.966 ] 00:25:01.966 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.966 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:25:01.966 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:01.966 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:01.966 13:19:48 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:01.966 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:01.966 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:01.966 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:01.966 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:01.966 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:01.966 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:01.966 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:01.966 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:01.966 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.966 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:01.966 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:01.966 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.966 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:01.966 "name": "Existed_Raid", 00:25:01.966 "uuid": "11caa705-9031-418f-b92b-b00b6a0a5c58", 00:25:01.966 "strip_size_kb": 0, 00:25:01.966 "state": "configuring", 00:25:01.966 "raid_level": "raid1", 
00:25:01.966 "superblock": true, 00:25:01.966 "num_base_bdevs": 2, 00:25:01.966 "num_base_bdevs_discovered": 1, 00:25:01.966 "num_base_bdevs_operational": 2, 00:25:01.966 "base_bdevs_list": [ 00:25:01.966 { 00:25:01.966 "name": "BaseBdev1", 00:25:01.966 "uuid": "2a254155-659e-4e5d-a8be-1be8f741c8c5", 00:25:01.966 "is_configured": true, 00:25:01.966 "data_offset": 256, 00:25:01.966 "data_size": 7936 00:25:01.966 }, 00:25:01.966 { 00:25:01.966 "name": "BaseBdev2", 00:25:01.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.966 "is_configured": false, 00:25:01.966 "data_offset": 0, 00:25:01.966 "data_size": 0 00:25:01.966 } 00:25:01.966 ] 00:25:01.966 }' 00:25:01.966 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:01.966 13:19:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:02.535 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:02.535 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.535 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:02.535 [2024-12-06 13:19:49.351342] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:02.535 [2024-12-06 13:19:49.351459] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:25:02.535 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.535 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:25:02.535 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:25:02.535 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:02.535 [2024-12-06 13:19:49.359428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:02.535 [2024-12-06 13:19:49.362273] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:02.535 [2024-12-06 13:19:49.362374] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:02.535 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.535 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:25:02.535 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:02.535 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:25:02.535 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:02.535 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:02.535 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:02.535 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:02.535 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:02.535 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:02.535 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:02.535 
13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:02.535 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:02.535 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:02.535 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.535 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:02.535 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:02.535 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.535 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:02.535 "name": "Existed_Raid", 00:25:02.535 "uuid": "3dfcbbde-6c3f-449e-961b-82094dc9a30e", 00:25:02.535 "strip_size_kb": 0, 00:25:02.535 "state": "configuring", 00:25:02.535 "raid_level": "raid1", 00:25:02.535 "superblock": true, 00:25:02.535 "num_base_bdevs": 2, 00:25:02.535 "num_base_bdevs_discovered": 1, 00:25:02.535 "num_base_bdevs_operational": 2, 00:25:02.535 "base_bdevs_list": [ 00:25:02.535 { 00:25:02.535 "name": "BaseBdev1", 00:25:02.535 "uuid": "2a254155-659e-4e5d-a8be-1be8f741c8c5", 00:25:02.535 "is_configured": true, 00:25:02.535 "data_offset": 256, 00:25:02.535 "data_size": 7936 00:25:02.535 }, 00:25:02.535 { 00:25:02.535 "name": "BaseBdev2", 00:25:02.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:02.535 "is_configured": false, 00:25:02.535 "data_offset": 0, 00:25:02.535 "data_size": 0 00:25:02.535 } 00:25:02.535 ] 00:25:02.535 }' 00:25:02.535 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:25:02.535 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:03.102 [2024-12-06 13:19:49.906709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:03.102 [2024-12-06 13:19:49.906981] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:03.102 [2024-12-06 13:19:49.907000] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:03.102 [2024-12-06 13:19:49.907105] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:03.102 [2024-12-06 13:19:49.907205] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:03.102 [2024-12-06 13:19:49.907223] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:25:03.102 [2024-12-06 13:19:49.907305] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:03.102 BaseBdev2 00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
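The xtrace records around this point show `waitforbdev` resolving its default timeout before polling `bdev_get_bdevs -t`. As a hedged illustration (reconstructed from the trace output above, not the actual `autotest_common.sh` source; `waitforbdev_sketch` is a hypothetical stand-in name), the logic amounts to:

```shell
# Sketch of the default-timeout step traced above: an empty second
# argument falls back to 2000 ms, matching the "[[ -z '' ]]" and
# "bdev_timeout=2000" records in the log.
waitforbdev_sketch() {
  local bdev_name=$1
  local bdev_timeout=$2
  # no timeout supplied, so apply the default before polling
  [[ -z $bdev_timeout ]] && bdev_timeout=2000
  echo "$bdev_name $bdev_timeout"
}

waitforbdev_sketch BaseBdev2   # prints "BaseBdev2 2000"
```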
00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:03.102 [ 00:25:03.102 { 00:25:03.102 "name": "BaseBdev2", 00:25:03.102 "aliases": [ 00:25:03.102 "b46adbf6-3495-4467-a6b5-e75827cd4776" 00:25:03.102 ], 00:25:03.102 "product_name": "Malloc disk", 00:25:03.102 "block_size": 4128, 00:25:03.102 "num_blocks": 8192, 00:25:03.102 "uuid": "b46adbf6-3495-4467-a6b5-e75827cd4776", 00:25:03.102 "md_size": 32, 00:25:03.102 "md_interleave": true, 00:25:03.102 "dif_type": 0, 00:25:03.102 "assigned_rate_limits": { 00:25:03.102 "rw_ios_per_sec": 0, 00:25:03.102 "rw_mbytes_per_sec": 0, 00:25:03.102 "r_mbytes_per_sec": 0, 00:25:03.102 "w_mbytes_per_sec": 0 00:25:03.102 }, 00:25:03.102 "claimed": true, 00:25:03.102 "claim_type": "exclusive_write", 
00:25:03.102 "zoned": false, 00:25:03.102 "supported_io_types": { 00:25:03.102 "read": true, 00:25:03.102 "write": true, 00:25:03.102 "unmap": true, 00:25:03.102 "flush": true, 00:25:03.102 "reset": true, 00:25:03.102 "nvme_admin": false, 00:25:03.102 "nvme_io": false, 00:25:03.102 "nvme_io_md": false, 00:25:03.102 "write_zeroes": true, 00:25:03.102 "zcopy": true, 00:25:03.102 "get_zone_info": false, 00:25:03.102 "zone_management": false, 00:25:03.102 "zone_append": false, 00:25:03.102 "compare": false, 00:25:03.102 "compare_and_write": false, 00:25:03.102 "abort": true, 00:25:03.102 "seek_hole": false, 00:25:03.102 "seek_data": false, 00:25:03.102 "copy": true, 00:25:03.102 "nvme_iov_md": false 00:25:03.102 }, 00:25:03.102 "memory_domains": [ 00:25:03.102 { 00:25:03.102 "dma_device_id": "system", 00:25:03.102 "dma_device_type": 1 00:25:03.102 }, 00:25:03.102 { 00:25:03.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:03.102 "dma_device_type": 2 00:25:03.102 } 00:25:03.102 ], 00:25:03.102 "driver_specific": {} 00:25:03.102 } 00:25:03.102 ] 00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:03.102 
13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:03.102 "name": "Existed_Raid", 00:25:03.102 "uuid": "3dfcbbde-6c3f-449e-961b-82094dc9a30e", 00:25:03.102 "strip_size_kb": 0, 00:25:03.102 "state": "online", 00:25:03.102 "raid_level": "raid1", 00:25:03.102 "superblock": true, 00:25:03.102 "num_base_bdevs": 2, 00:25:03.102 "num_base_bdevs_discovered": 2, 00:25:03.102 
"num_base_bdevs_operational": 2, 00:25:03.102 "base_bdevs_list": [ 00:25:03.102 { 00:25:03.102 "name": "BaseBdev1", 00:25:03.102 "uuid": "2a254155-659e-4e5d-a8be-1be8f741c8c5", 00:25:03.102 "is_configured": true, 00:25:03.102 "data_offset": 256, 00:25:03.102 "data_size": 7936 00:25:03.102 }, 00:25:03.102 { 00:25:03.102 "name": "BaseBdev2", 00:25:03.102 "uuid": "b46adbf6-3495-4467-a6b5-e75827cd4776", 00:25:03.102 "is_configured": true, 00:25:03.102 "data_offset": 256, 00:25:03.102 "data_size": 7936 00:25:03.102 } 00:25:03.102 ] 00:25:03.102 }' 00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:03.102 13:19:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:03.669 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:25:03.669 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:03.669 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:03.669 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:03.669 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:25:03.669 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:03.669 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:03.669 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.669 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:03.669 13:19:50 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:03.669 [2024-12-06 13:19:50.491389] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:03.669 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.669 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:03.669 "name": "Existed_Raid", 00:25:03.669 "aliases": [ 00:25:03.669 "3dfcbbde-6c3f-449e-961b-82094dc9a30e" 00:25:03.669 ], 00:25:03.669 "product_name": "Raid Volume", 00:25:03.669 "block_size": 4128, 00:25:03.669 "num_blocks": 7936, 00:25:03.669 "uuid": "3dfcbbde-6c3f-449e-961b-82094dc9a30e", 00:25:03.669 "md_size": 32, 00:25:03.669 "md_interleave": true, 00:25:03.669 "dif_type": 0, 00:25:03.669 "assigned_rate_limits": { 00:25:03.669 "rw_ios_per_sec": 0, 00:25:03.669 "rw_mbytes_per_sec": 0, 00:25:03.669 "r_mbytes_per_sec": 0, 00:25:03.670 "w_mbytes_per_sec": 0 00:25:03.670 }, 00:25:03.670 "claimed": false, 00:25:03.670 "zoned": false, 00:25:03.670 "supported_io_types": { 00:25:03.670 "read": true, 00:25:03.670 "write": true, 00:25:03.670 "unmap": false, 00:25:03.670 "flush": false, 00:25:03.670 "reset": true, 00:25:03.670 "nvme_admin": false, 00:25:03.670 "nvme_io": false, 00:25:03.670 "nvme_io_md": false, 00:25:03.670 "write_zeroes": true, 00:25:03.670 "zcopy": false, 00:25:03.670 "get_zone_info": false, 00:25:03.670 "zone_management": false, 00:25:03.670 "zone_append": false, 00:25:03.670 "compare": false, 00:25:03.670 "compare_and_write": false, 00:25:03.670 "abort": false, 00:25:03.670 "seek_hole": false, 00:25:03.670 "seek_data": false, 00:25:03.670 "copy": false, 00:25:03.670 "nvme_iov_md": false 00:25:03.670 }, 00:25:03.670 "memory_domains": [ 00:25:03.670 { 00:25:03.670 "dma_device_id": "system", 00:25:03.670 "dma_device_type": 1 00:25:03.670 }, 00:25:03.670 { 00:25:03.670 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:25:03.670 "dma_device_type": 2 00:25:03.670 }, 00:25:03.670 { 00:25:03.670 "dma_device_id": "system", 00:25:03.670 "dma_device_type": 1 00:25:03.670 }, 00:25:03.670 { 00:25:03.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:03.670 "dma_device_type": 2 00:25:03.670 } 00:25:03.670 ], 00:25:03.670 "driver_specific": { 00:25:03.670 "raid": { 00:25:03.670 "uuid": "3dfcbbde-6c3f-449e-961b-82094dc9a30e", 00:25:03.670 "strip_size_kb": 0, 00:25:03.670 "state": "online", 00:25:03.670 "raid_level": "raid1", 00:25:03.670 "superblock": true, 00:25:03.670 "num_base_bdevs": 2, 00:25:03.670 "num_base_bdevs_discovered": 2, 00:25:03.670 "num_base_bdevs_operational": 2, 00:25:03.670 "base_bdevs_list": [ 00:25:03.670 { 00:25:03.670 "name": "BaseBdev1", 00:25:03.670 "uuid": "2a254155-659e-4e5d-a8be-1be8f741c8c5", 00:25:03.670 "is_configured": true, 00:25:03.670 "data_offset": 256, 00:25:03.670 "data_size": 7936 00:25:03.670 }, 00:25:03.670 { 00:25:03.670 "name": "BaseBdev2", 00:25:03.670 "uuid": "b46adbf6-3495-4467-a6b5-e75827cd4776", 00:25:03.670 "is_configured": true, 00:25:03.670 "data_offset": 256, 00:25:03.670 "data_size": 7936 00:25:03.670 } 00:25:03.670 ] 00:25:03.670 } 00:25:03.670 } 00:25:03.670 }' 00:25:03.670 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:03.670 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:25:03.670 BaseBdev2' 00:25:03.670 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:03.670 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:25:03.670 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:25:03.670 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:25:03.670 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:03.670 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.670 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:03.670 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.929 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:25:03.929 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:25:03.929 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:03.929 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:03.929 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:03.929 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.929 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:03.929 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.929 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:25:03.929 
13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:25:03.929 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:03.929 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.929 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:03.929 [2024-12-06 13:19:50.747081] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:03.929 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.929 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:25:03.929 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:25:03.929 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:03.929 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:25:03.929 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:25:03.929 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:25:03.929 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:03.929 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:03.929 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:03.929 13:19:50 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:03.929 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:03.929 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:03.929 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:03.929 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:03.929 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:03.929 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:03.929 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.929 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:03.929 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:03.929 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.929 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:03.929 "name": "Existed_Raid", 00:25:03.929 "uuid": "3dfcbbde-6c3f-449e-961b-82094dc9a30e", 00:25:03.929 "strip_size_kb": 0, 00:25:03.929 "state": "online", 00:25:03.929 "raid_level": "raid1", 00:25:03.929 "superblock": true, 00:25:03.929 "num_base_bdevs": 2, 00:25:03.929 "num_base_bdevs_discovered": 1, 00:25:03.929 "num_base_bdevs_operational": 1, 00:25:03.929 "base_bdevs_list": [ 00:25:03.929 { 00:25:03.929 "name": null, 00:25:03.929 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:25:03.929 "is_configured": false, 00:25:03.929 "data_offset": 0, 00:25:03.929 "data_size": 7936 00:25:03.929 }, 00:25:03.929 { 00:25:03.929 "name": "BaseBdev2", 00:25:03.929 "uuid": "b46adbf6-3495-4467-a6b5-e75827cd4776", 00:25:03.929 "is_configured": true, 00:25:03.929 "data_offset": 256, 00:25:03.929 "data_size": 7936 00:25:03.929 } 00:25:03.929 ] 00:25:03.929 }' 00:25:03.929 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:03.929 13:19:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:04.497 13:19:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:25:04.497 13:19:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:04.497 13:19:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:04.497 13:19:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:04.497 13:19:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.497 13:19:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:04.498 13:19:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.498 13:19:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:04.498 13:19:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:04.498 13:19:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:25:04.498 13:19:51 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.498 13:19:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:04.498 [2024-12-06 13:19:51.422909] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:04.498 [2024-12-06 13:19:51.423097] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:04.757 [2024-12-06 13:19:51.515028] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:04.757 [2024-12-06 13:19:51.515143] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:04.757 [2024-12-06 13:19:51.515166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:25:04.757 13:19:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.757 13:19:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:04.757 13:19:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:04.757 13:19:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:04.757 13:19:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.757 13:19:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:04.757 13:19:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:25:04.757 13:19:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.757 13:19:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:25:04.757 13:19:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:25:04.757 13:19:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:25:04.757 13:19:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 89329 00:25:04.757 13:19:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89329 ']' 00:25:04.757 13:19:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89329 00:25:04.757 13:19:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:25:04.757 13:19:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:04.757 13:19:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89329 00:25:04.757 13:19:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:04.757 13:19:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:04.757 killing process with pid 89329 00:25:04.757 13:19:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89329' 00:25:04.757 13:19:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89329 00:25:04.757 [2024-12-06 13:19:51.602615] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:04.757 13:19:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89329 00:25:04.757 [2024-12-06 13:19:51.617641] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:05.691 
13:19:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:25:05.691 00:25:05.691 real 0m5.541s 00:25:05.691 user 0m8.326s 00:25:05.691 sys 0m0.854s 00:25:05.691 13:19:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:05.691 13:19:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:05.691 ************************************ 00:25:05.691 END TEST raid_state_function_test_sb_md_interleaved 00:25:05.692 ************************************ 00:25:05.958 13:19:52 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:25:05.958 13:19:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:05.958 13:19:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:05.958 13:19:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:05.958 ************************************ 00:25:05.958 START TEST raid_superblock_test_md_interleaved 00:25:05.958 ************************************ 00:25:05.958 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:25:05.958 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:25:05.958 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:25:05.958 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:25:05.958 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:25:05.959 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:25:05.959 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:25:05.959 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:25:05.959 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:25:05.959 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:25:05.959 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:25:05.959 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:25:05.959 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:25:05.959 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:25:05.959 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:25:05.959 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:25:05.959 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89580 00:25:05.959 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89580 00:25:05.959 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89580 ']' 00:25:05.959 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:25:05.959 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.959 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:05.959 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:05.959 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:05.959 13:19:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:05.959 [2024-12-06 13:19:52.857537] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:25:05.959 [2024-12-06 13:19:52.857735] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89580 ] 00:25:06.217 [2024-12-06 13:19:53.046326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.217 [2024-12-06 13:19:53.188007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.476 [2024-12-06 13:19:53.411723] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:06.476 [2024-12-06 13:19:53.411818] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:07.063 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:07.063 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:25:07.063 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:07.064 malloc1 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:07.064 [2024-12-06 13:19:53.850186] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:07.064 [2024-12-06 13:19:53.850274] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:07.064 [2024-12-06 13:19:53.850308] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:07.064 [2024-12-06 13:19:53.850322] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:07.064 
[2024-12-06 13:19:53.852969] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:07.064 [2024-12-06 13:19:53.853008] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:07.064 pt1 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:07.064 malloc2 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:07.064 [2024-12-06 13:19:53.904869] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:07.064 [2024-12-06 13:19:53.904970] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:07.064 [2024-12-06 13:19:53.905002] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:07.064 [2024-12-06 13:19:53.905016] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:07.064 [2024-12-06 13:19:53.907684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:07.064 [2024-12-06 13:19:53.907741] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:07.064 pt2 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:07.064 [2024-12-06 13:19:53.912940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:07.064 [2024-12-06 13:19:53.915419] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:07.064 [2024-12-06 13:19:53.915681] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:07.064 [2024-12-06 13:19:53.915716] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:07.064 [2024-12-06 13:19:53.915807] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:07.064 [2024-12-06 13:19:53.915917] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:07.064 [2024-12-06 13:19:53.915934] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:25:07.064 [2024-12-06 13:19:53.916020] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:07.064 
13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:07.064 "name": "raid_bdev1", 00:25:07.064 "uuid": "3608cfed-ebd0-4812-8053-f1e3e5d08355", 00:25:07.064 "strip_size_kb": 0, 00:25:07.064 "state": "online", 00:25:07.064 "raid_level": "raid1", 00:25:07.064 "superblock": true, 00:25:07.064 "num_base_bdevs": 2, 00:25:07.064 "num_base_bdevs_discovered": 2, 00:25:07.064 "num_base_bdevs_operational": 2, 00:25:07.064 "base_bdevs_list": [ 00:25:07.064 { 00:25:07.064 "name": "pt1", 00:25:07.064 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:07.064 "is_configured": true, 00:25:07.064 "data_offset": 256, 00:25:07.064 "data_size": 7936 00:25:07.064 }, 00:25:07.064 { 00:25:07.064 "name": "pt2", 00:25:07.064 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:07.064 "is_configured": true, 00:25:07.064 "data_offset": 256, 00:25:07.064 "data_size": 7936 00:25:07.064 } 00:25:07.064 ] 00:25:07.064 }' 00:25:07.064 13:19:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:07.064 13:19:53 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:07.645 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:25:07.645 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:07.645 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:07.645 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:07.645 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:25:07.645 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:07.645 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:07.645 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:07.645 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.645 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:07.645 [2024-12-06 13:19:54.453609] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:07.645 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.645 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:07.645 "name": "raid_bdev1", 00:25:07.645 "aliases": [ 00:25:07.645 "3608cfed-ebd0-4812-8053-f1e3e5d08355" 00:25:07.645 ], 00:25:07.645 "product_name": "Raid Volume", 00:25:07.645 "block_size": 4128, 00:25:07.645 "num_blocks": 7936, 00:25:07.645 "uuid": "3608cfed-ebd0-4812-8053-f1e3e5d08355", 00:25:07.645 "md_size": 32, 
00:25:07.645 "md_interleave": true, 00:25:07.645 "dif_type": 0, 00:25:07.645 "assigned_rate_limits": { 00:25:07.645 "rw_ios_per_sec": 0, 00:25:07.645 "rw_mbytes_per_sec": 0, 00:25:07.645 "r_mbytes_per_sec": 0, 00:25:07.645 "w_mbytes_per_sec": 0 00:25:07.645 }, 00:25:07.645 "claimed": false, 00:25:07.645 "zoned": false, 00:25:07.645 "supported_io_types": { 00:25:07.645 "read": true, 00:25:07.645 "write": true, 00:25:07.645 "unmap": false, 00:25:07.645 "flush": false, 00:25:07.645 "reset": true, 00:25:07.645 "nvme_admin": false, 00:25:07.645 "nvme_io": false, 00:25:07.645 "nvme_io_md": false, 00:25:07.645 "write_zeroes": true, 00:25:07.645 "zcopy": false, 00:25:07.645 "get_zone_info": false, 00:25:07.645 "zone_management": false, 00:25:07.645 "zone_append": false, 00:25:07.645 "compare": false, 00:25:07.645 "compare_and_write": false, 00:25:07.645 "abort": false, 00:25:07.645 "seek_hole": false, 00:25:07.645 "seek_data": false, 00:25:07.645 "copy": false, 00:25:07.645 "nvme_iov_md": false 00:25:07.645 }, 00:25:07.645 "memory_domains": [ 00:25:07.645 { 00:25:07.645 "dma_device_id": "system", 00:25:07.645 "dma_device_type": 1 00:25:07.645 }, 00:25:07.645 { 00:25:07.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:07.645 "dma_device_type": 2 00:25:07.645 }, 00:25:07.645 { 00:25:07.645 "dma_device_id": "system", 00:25:07.645 "dma_device_type": 1 00:25:07.645 }, 00:25:07.645 { 00:25:07.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:07.645 "dma_device_type": 2 00:25:07.645 } 00:25:07.645 ], 00:25:07.645 "driver_specific": { 00:25:07.645 "raid": { 00:25:07.645 "uuid": "3608cfed-ebd0-4812-8053-f1e3e5d08355", 00:25:07.645 "strip_size_kb": 0, 00:25:07.645 "state": "online", 00:25:07.645 "raid_level": "raid1", 00:25:07.645 "superblock": true, 00:25:07.645 "num_base_bdevs": 2, 00:25:07.645 "num_base_bdevs_discovered": 2, 00:25:07.645 "num_base_bdevs_operational": 2, 00:25:07.645 "base_bdevs_list": [ 00:25:07.645 { 00:25:07.645 "name": "pt1", 00:25:07.645 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:25:07.645 "is_configured": true, 00:25:07.645 "data_offset": 256, 00:25:07.645 "data_size": 7936 00:25:07.645 }, 00:25:07.645 { 00:25:07.645 "name": "pt2", 00:25:07.645 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:07.645 "is_configured": true, 00:25:07.645 "data_offset": 256, 00:25:07.645 "data_size": 7936 00:25:07.645 } 00:25:07.645 ] 00:25:07.645 } 00:25:07.645 } 00:25:07.645 }' 00:25:07.645 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:07.645 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:07.645 pt2' 00:25:07.646 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:07.646 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:25:07.646 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:07.646 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:07.646 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.646 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:07.646 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:07.646 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.646 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:25:07.646 13:19:54 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:25:07.646 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:07.646 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:07.646 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:07.646 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.646 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:07.905 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.905 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:25:07.905 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:25:07.905 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:25:07.905 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:07.905 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.905 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:07.905 [2024-12-06 13:19:54.705656] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:07.905 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.905 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3608cfed-ebd0-4812-8053-f1e3e5d08355 00:25:07.905 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 3608cfed-ebd0-4812-8053-f1e3e5d08355 ']' 00:25:07.905 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:07.905 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.905 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:07.905 [2024-12-06 13:19:54.753199] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:07.905 [2024-12-06 13:19:54.753238] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:07.905 [2024-12-06 13:19:54.753370] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:07.905 [2024-12-06 13:19:54.753471] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:07.905 [2024-12-06 13:19:54.753548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:07.905 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.905 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:07.905 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.905 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:07.905 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:25:07.905 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.906 13:19:54 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:25:07.906 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:25:07.906 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:07.906 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:25:07.906 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.906 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:07.906 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.906 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:07.906 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:25:07.906 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.906 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:07.906 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.906 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:25:07.906 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.906 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:07.906 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:07.906 13:19:54 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.906 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:25:07.906 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:07.906 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:25:07.906 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:07.906 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:07.906 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:07.906 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:07.906 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:07.906 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:25:07.906 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.906 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:07.906 [2024-12-06 13:19:54.885225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:07.906 [2024-12-06 13:19:54.888092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:07.906 [2024-12-06 13:19:54.888219] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:25:07.906 [2024-12-06 13:19:54.888315] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:25:07.906 [2024-12-06 13:19:54.888342] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:07.906 [2024-12-06 13:19:54.888359] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:25:07.906 request: 00:25:07.906 { 00:25:07.906 "name": "raid_bdev1", 00:25:07.906 "raid_level": "raid1", 00:25:07.906 "base_bdevs": [ 00:25:07.906 "malloc1", 00:25:07.906 "malloc2" 00:25:07.906 ], 00:25:07.906 "superblock": false, 00:25:07.906 "method": "bdev_raid_create", 00:25:07.906 "req_id": 1 00:25:07.906 } 00:25:07.906 Got JSON-RPC error response 00:25:07.906 response: 00:25:07.906 { 00:25:07.906 "code": -17, 00:25:07.906 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:07.906 } 00:25:07.906 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:07.906 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:25:07.906 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:07.906 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:07.906 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:07.906 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:07.906 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:25:07.906 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.906 13:19:54 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:07.906 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.165 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:25:08.165 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:25:08.165 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:08.165 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.165 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:08.165 [2024-12-06 13:19:54.949194] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:08.165 [2024-12-06 13:19:54.949283] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:08.165 [2024-12-06 13:19:54.949309] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:08.165 [2024-12-06 13:19:54.949326] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:08.165 [2024-12-06 13:19:54.952247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:08.165 [2024-12-06 13:19:54.952303] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:08.165 [2024-12-06 13:19:54.952382] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:08.165 [2024-12-06 13:19:54.952455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:08.165 pt1 00:25:08.165 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.165 13:19:54 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:25:08.165 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:08.165 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:08.165 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:08.165 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:08.165 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:08.165 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:08.165 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:08.165 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:08.165 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:08.165 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:08.165 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.165 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:08.165 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:08.165 13:19:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.165 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:08.165 
"name": "raid_bdev1", 00:25:08.165 "uuid": "3608cfed-ebd0-4812-8053-f1e3e5d08355", 00:25:08.165 "strip_size_kb": 0, 00:25:08.165 "state": "configuring", 00:25:08.166 "raid_level": "raid1", 00:25:08.166 "superblock": true, 00:25:08.166 "num_base_bdevs": 2, 00:25:08.166 "num_base_bdevs_discovered": 1, 00:25:08.166 "num_base_bdevs_operational": 2, 00:25:08.166 "base_bdevs_list": [ 00:25:08.166 { 00:25:08.166 "name": "pt1", 00:25:08.166 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:08.166 "is_configured": true, 00:25:08.166 "data_offset": 256, 00:25:08.166 "data_size": 7936 00:25:08.166 }, 00:25:08.166 { 00:25:08.166 "name": null, 00:25:08.166 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:08.166 "is_configured": false, 00:25:08.166 "data_offset": 256, 00:25:08.166 "data_size": 7936 00:25:08.166 } 00:25:08.166 ] 00:25:08.166 }' 00:25:08.166 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:08.166 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:08.732 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:25:08.732 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:25:08.732 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:08.732 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:08.732 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.732 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:08.732 [2024-12-06 13:19:55.489431] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:08.732 [2024-12-06 13:19:55.489585] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:08.732 [2024-12-06 13:19:55.489630] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:08.732 [2024-12-06 13:19:55.489649] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:08.732 [2024-12-06 13:19:55.489983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:08.732 [2024-12-06 13:19:55.490022] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:08.732 [2024-12-06 13:19:55.490102] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:08.732 [2024-12-06 13:19:55.490142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:08.732 [2024-12-06 13:19:55.490303] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:08.732 [2024-12-06 13:19:55.490330] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:08.732 [2024-12-06 13:19:55.490434] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:08.732 [2024-12-06 13:19:55.490578] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:08.732 [2024-12-06 13:19:55.490601] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:25:08.732 [2024-12-06 13:19:55.490697] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:08.732 pt2 00:25:08.732 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.732 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:25:08.732 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:08.732 13:19:55 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:08.732 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:08.732 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:08.732 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:08.732 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:08.732 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:08.732 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:08.732 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:08.732 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:08.732 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:08.732 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:08.732 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.732 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:08.732 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:08.732 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.732 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:08.732 "name": 
"raid_bdev1", 00:25:08.732 "uuid": "3608cfed-ebd0-4812-8053-f1e3e5d08355", 00:25:08.732 "strip_size_kb": 0, 00:25:08.732 "state": "online", 00:25:08.732 "raid_level": "raid1", 00:25:08.732 "superblock": true, 00:25:08.732 "num_base_bdevs": 2, 00:25:08.732 "num_base_bdevs_discovered": 2, 00:25:08.732 "num_base_bdevs_operational": 2, 00:25:08.732 "base_bdevs_list": [ 00:25:08.732 { 00:25:08.732 "name": "pt1", 00:25:08.732 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:08.732 "is_configured": true, 00:25:08.732 "data_offset": 256, 00:25:08.732 "data_size": 7936 00:25:08.732 }, 00:25:08.732 { 00:25:08.732 "name": "pt2", 00:25:08.732 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:08.732 "is_configured": true, 00:25:08.732 "data_offset": 256, 00:25:08.732 "data_size": 7936 00:25:08.732 } 00:25:08.732 ] 00:25:08.732 }' 00:25:08.732 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:08.732 13:19:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:09.300 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:25:09.300 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:09.300 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:09.300 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:09.300 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:25:09.300 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:09.300 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:09.300 13:19:56 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.300 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:09.300 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:09.300 [2024-12-06 13:19:56.029964] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:09.300 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.300 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:09.300 "name": "raid_bdev1", 00:25:09.300 "aliases": [ 00:25:09.300 "3608cfed-ebd0-4812-8053-f1e3e5d08355" 00:25:09.300 ], 00:25:09.300 "product_name": "Raid Volume", 00:25:09.300 "block_size": 4128, 00:25:09.300 "num_blocks": 7936, 00:25:09.300 "uuid": "3608cfed-ebd0-4812-8053-f1e3e5d08355", 00:25:09.300 "md_size": 32, 00:25:09.300 "md_interleave": true, 00:25:09.300 "dif_type": 0, 00:25:09.300 "assigned_rate_limits": { 00:25:09.300 "rw_ios_per_sec": 0, 00:25:09.300 "rw_mbytes_per_sec": 0, 00:25:09.300 "r_mbytes_per_sec": 0, 00:25:09.300 "w_mbytes_per_sec": 0 00:25:09.300 }, 00:25:09.300 "claimed": false, 00:25:09.300 "zoned": false, 00:25:09.300 "supported_io_types": { 00:25:09.300 "read": true, 00:25:09.300 "write": true, 00:25:09.300 "unmap": false, 00:25:09.300 "flush": false, 00:25:09.300 "reset": true, 00:25:09.300 "nvme_admin": false, 00:25:09.300 "nvme_io": false, 00:25:09.300 "nvme_io_md": false, 00:25:09.300 "write_zeroes": true, 00:25:09.300 "zcopy": false, 00:25:09.300 "get_zone_info": false, 00:25:09.300 "zone_management": false, 00:25:09.300 "zone_append": false, 00:25:09.300 "compare": false, 00:25:09.300 "compare_and_write": false, 00:25:09.300 "abort": false, 00:25:09.300 "seek_hole": false, 00:25:09.300 "seek_data": false, 00:25:09.300 "copy": false, 00:25:09.300 "nvme_iov_md": 
false 00:25:09.300 }, 00:25:09.300 "memory_domains": [ 00:25:09.300 { 00:25:09.300 "dma_device_id": "system", 00:25:09.300 "dma_device_type": 1 00:25:09.300 }, 00:25:09.300 { 00:25:09.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:09.300 "dma_device_type": 2 00:25:09.300 }, 00:25:09.300 { 00:25:09.300 "dma_device_id": "system", 00:25:09.300 "dma_device_type": 1 00:25:09.300 }, 00:25:09.300 { 00:25:09.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:09.300 "dma_device_type": 2 00:25:09.300 } 00:25:09.300 ], 00:25:09.300 "driver_specific": { 00:25:09.300 "raid": { 00:25:09.300 "uuid": "3608cfed-ebd0-4812-8053-f1e3e5d08355", 00:25:09.300 "strip_size_kb": 0, 00:25:09.300 "state": "online", 00:25:09.300 "raid_level": "raid1", 00:25:09.300 "superblock": true, 00:25:09.300 "num_base_bdevs": 2, 00:25:09.300 "num_base_bdevs_discovered": 2, 00:25:09.300 "num_base_bdevs_operational": 2, 00:25:09.300 "base_bdevs_list": [ 00:25:09.300 { 00:25:09.300 "name": "pt1", 00:25:09.300 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:09.300 "is_configured": true, 00:25:09.300 "data_offset": 256, 00:25:09.300 "data_size": 7936 00:25:09.300 }, 00:25:09.300 { 00:25:09.300 "name": "pt2", 00:25:09.300 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:09.300 "is_configured": true, 00:25:09.300 "data_offset": 256, 00:25:09.300 "data_size": 7936 00:25:09.300 } 00:25:09.300 ] 00:25:09.300 } 00:25:09.300 } 00:25:09.300 }' 00:25:09.300 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:09.300 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:09.300 pt2' 00:25:09.300 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:09.300 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:25:09.300 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:09.301 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:09.301 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.301 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:09.301 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:09.301 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.301 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:25:09.301 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:25:09.301 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:09.301 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:09.301 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.301 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:09.301 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:09.301 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.301 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:25:09.301 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:25:09.301 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:25:09.301 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:09.301 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.301 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:09.301 [2024-12-06 13:19:56.289931] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:09.301 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.560 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 3608cfed-ebd0-4812-8053-f1e3e5d08355 '!=' 3608cfed-ebd0-4812-8053-f1e3e5d08355 ']' 00:25:09.560 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:25:09.560 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:09.560 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:25:09.560 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:25:09.560 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.560 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:09.560 [2024-12-06 13:19:56.333663] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:25:09.560 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:25:09.560 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:09.560 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:09.560 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:09.560 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:09.560 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:09.560 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:09.560 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:09.560 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:09.560 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:09.560 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:09.560 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:09.560 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:09.560 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.560 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:09.560 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.560 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:25:09.560 "name": "raid_bdev1", 00:25:09.560 "uuid": "3608cfed-ebd0-4812-8053-f1e3e5d08355", 00:25:09.560 "strip_size_kb": 0, 00:25:09.560 "state": "online", 00:25:09.560 "raid_level": "raid1", 00:25:09.560 "superblock": true, 00:25:09.560 "num_base_bdevs": 2, 00:25:09.560 "num_base_bdevs_discovered": 1, 00:25:09.560 "num_base_bdevs_operational": 1, 00:25:09.560 "base_bdevs_list": [ 00:25:09.560 { 00:25:09.560 "name": null, 00:25:09.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.560 "is_configured": false, 00:25:09.560 "data_offset": 0, 00:25:09.560 "data_size": 7936 00:25:09.560 }, 00:25:09.560 { 00:25:09.560 "name": "pt2", 00:25:09.560 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:09.560 "is_configured": true, 00:25:09.560 "data_offset": 256, 00:25:09.560 "data_size": 7936 00:25:09.560 } 00:25:09.560 ] 00:25:09.560 }' 00:25:09.560 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:09.560 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:10.130 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:10.130 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.130 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:10.130 [2024-12-06 13:19:56.865950] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:10.130 [2024-12-06 13:19:56.865992] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:10.130 [2024-12-06 13:19:56.866117] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:10.130 [2024-12-06 13:19:56.866196] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:25:10.130 [2024-12-06 13:19:56.866224] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:25:10.130 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.130 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:10.130 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.130 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:25:10.130 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:10.130 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.130 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:25:10.130 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:25:10.130 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:25:10.130 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:25:10.130 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:25:10.130 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.130 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:10.130 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.130 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:25:10.130 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:25:10.130 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:25:10.130 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:25:10.130 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:25:10.130 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:10.130 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.130 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:10.130 [2024-12-06 13:19:56.933965] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:10.130 [2024-12-06 13:19:56.934069] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:10.130 [2024-12-06 13:19:56.934099] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:25:10.130 [2024-12-06 13:19:56.934118] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:10.130 [2024-12-06 13:19:56.937119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:10.130 [2024-12-06 13:19:56.937166] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:10.130 [2024-12-06 13:19:56.937272] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:10.130 [2024-12-06 13:19:56.937347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:10.130 [2024-12-06 13:19:56.937467] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:25:10.130 [2024-12-06 13:19:56.937489] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:25:10.130 [2024-12-06 13:19:56.937641] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:10.131 [2024-12-06 13:19:56.937747] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:25:10.131 [2024-12-06 13:19:56.937768] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:25:10.131 [2024-12-06 13:19:56.937967] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:10.131 pt2 00:25:10.131 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.131 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:10.131 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:10.131 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:10.131 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:10.131 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:10.131 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:10.131 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:10.131 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:10.131 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:10.131 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:10.131 13:19:56 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:10.131 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:10.131 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.131 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:10.131 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.131 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:10.131 "name": "raid_bdev1", 00:25:10.131 "uuid": "3608cfed-ebd0-4812-8053-f1e3e5d08355", 00:25:10.131 "strip_size_kb": 0, 00:25:10.131 "state": "online", 00:25:10.131 "raid_level": "raid1", 00:25:10.131 "superblock": true, 00:25:10.131 "num_base_bdevs": 2, 00:25:10.131 "num_base_bdevs_discovered": 1, 00:25:10.131 "num_base_bdevs_operational": 1, 00:25:10.131 "base_bdevs_list": [ 00:25:10.131 { 00:25:10.131 "name": null, 00:25:10.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:10.131 "is_configured": false, 00:25:10.131 "data_offset": 256, 00:25:10.131 "data_size": 7936 00:25:10.131 }, 00:25:10.131 { 00:25:10.131 "name": "pt2", 00:25:10.131 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:10.131 "is_configured": true, 00:25:10.131 "data_offset": 256, 00:25:10.131 "data_size": 7936 00:25:10.131 } 00:25:10.131 ] 00:25:10.131 }' 00:25:10.131 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:10.131 13:19:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:10.698 13:19:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:10.698 13:19:57 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.698 13:19:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:10.698 [2024-12-06 13:19:57.446135] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:10.698 [2024-12-06 13:19:57.446179] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:10.698 [2024-12-06 13:19:57.446366] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:10.698 [2024-12-06 13:19:57.446515] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:10.698 [2024-12-06 13:19:57.446539] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:25:10.698 13:19:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.698 13:19:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:10.698 13:19:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:25:10.698 13:19:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.698 13:19:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:10.698 13:19:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.698 13:19:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:25:10.698 13:19:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:25:10.698 13:19:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:25:10.698 13:19:57 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:10.699 13:19:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.699 13:19:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:10.699 [2024-12-06 13:19:57.506182] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:10.699 [2024-12-06 13:19:57.506320] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:10.699 [2024-12-06 13:19:57.506359] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:25:10.699 [2024-12-06 13:19:57.506375] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:10.699 [2024-12-06 13:19:57.509503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:10.699 [2024-12-06 13:19:57.509564] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:10.699 [2024-12-06 13:19:57.509649] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:10.699 [2024-12-06 13:19:57.509715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:10.699 [2024-12-06 13:19:57.509868] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:25:10.699 [2024-12-06 13:19:57.509887] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:10.699 [2024-12-06 13:19:57.509914] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:25:10.699 [2024-12-06 13:19:57.509987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:10.699 [2024-12-06 13:19:57.510097] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:25:10.699 [2024-12-06 13:19:57.510119] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:10.699 [2024-12-06 13:19:57.510214] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:25:10.699 [2024-12-06 13:19:57.510316] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:25:10.699 [2024-12-06 13:19:57.510348] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:25:10.699 [2024-12-06 13:19:57.510545] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:10.699 pt1 00:25:10.699 13:19:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.699 13:19:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:25:10.699 13:19:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:10.699 13:19:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:10.699 13:19:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:10.699 13:19:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:10.699 13:19:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:10.699 13:19:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:10.699 13:19:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:10.699 13:19:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:10.699 13:19:57 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:10.699 13:19:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:10.699 13:19:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:10.699 13:19:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:10.699 13:19:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.699 13:19:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:10.699 13:19:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.699 13:19:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:10.699 "name": "raid_bdev1", 00:25:10.699 "uuid": "3608cfed-ebd0-4812-8053-f1e3e5d08355", 00:25:10.699 "strip_size_kb": 0, 00:25:10.699 "state": "online", 00:25:10.699 "raid_level": "raid1", 00:25:10.699 "superblock": true, 00:25:10.699 "num_base_bdevs": 2, 00:25:10.699 "num_base_bdevs_discovered": 1, 00:25:10.699 "num_base_bdevs_operational": 1, 00:25:10.699 "base_bdevs_list": [ 00:25:10.699 { 00:25:10.699 "name": null, 00:25:10.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:10.699 "is_configured": false, 00:25:10.699 "data_offset": 256, 00:25:10.699 "data_size": 7936 00:25:10.699 }, 00:25:10.699 { 00:25:10.699 "name": "pt2", 00:25:10.699 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:10.699 "is_configured": true, 00:25:10.699 "data_offset": 256, 00:25:10.699 "data_size": 7936 00:25:10.699 } 00:25:10.699 ] 00:25:10.699 }' 00:25:10.699 13:19:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:10.699 13:19:57 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:25:11.267 13:19:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:25:11.267 13:19:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.267 13:19:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:25:11.267 13:19:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:11.267 13:19:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.267 13:19:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:25:11.267 13:19:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:11.267 13:19:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.267 13:19:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:25:11.267 13:19:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:11.267 [2024-12-06 13:19:58.078680] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:11.267 13:19:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.267 13:19:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 3608cfed-ebd0-4812-8053-f1e3e5d08355 '!=' 3608cfed-ebd0-4812-8053-f1e3e5d08355 ']' 00:25:11.267 13:19:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89580 00:25:11.267 13:19:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89580 ']' 00:25:11.267 13:19:58 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89580 00:25:11.267 13:19:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:25:11.267 13:19:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:11.267 13:19:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89580 00:25:11.267 killing process with pid 89580 00:25:11.267 13:19:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:11.267 13:19:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:11.267 13:19:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89580' 00:25:11.267 13:19:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 89580 00:25:11.267 [2024-12-06 13:19:58.161084] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:11.267 13:19:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 89580 00:25:11.267 [2024-12-06 13:19:58.161220] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:11.267 [2024-12-06 13:19:58.161298] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:11.267 [2024-12-06 13:19:58.161323] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:25:11.526 [2024-12-06 13:19:58.350669] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:12.905 13:19:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:25:12.905 00:25:12.905 real 0m6.768s 00:25:12.905 user 0m10.573s 00:25:12.905 sys 0m1.060s 
00:25:12.905 13:19:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:12.905 ************************************ 00:25:12.905 END TEST raid_superblock_test_md_interleaved 00:25:12.905 ************************************ 00:25:12.905 13:19:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:12.905 13:19:59 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:25:12.905 13:19:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:25:12.905 13:19:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:12.905 13:19:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:12.905 ************************************ 00:25:12.905 START TEST raid_rebuild_test_sb_md_interleaved 00:25:12.905 ************************************ 00:25:12.905 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:25:12.905 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:25:12.905 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:25:12.905 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:25:12.905 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:25:12.905 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:25:12.905 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:25:12.905 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:12.905 13:19:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:25:12.905 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:12.905 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:12.905 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:25:12.905 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:25:12.905 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:25:12.905 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:12.905 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:25:12.905 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:25:12.905 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:25:12.905 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:25:12.905 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:25:12.905 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:25:12.905 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:25:12.905 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:25:12.905 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:25:12.905 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:25:12.905 
13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89910 00:25:12.905 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89910 00:25:12.905 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:12.905 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89910 ']' 00:25:12.905 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:12.905 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:12.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:12.905 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:12.905 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:12.906 13:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:12.906 [2024-12-06 13:19:59.674591] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:25:12.906 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:12.906 Zero copy mechanism will not be used. 
00:25:12.906 [2024-12-06 13:19:59.675078] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89910 ] 00:25:12.906 [2024-12-06 13:19:59.852855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.164 [2024-12-06 13:20:00.005778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.422 [2024-12-06 13:20:00.239629] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:13.422 [2024-12-06 13:20:00.239730] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:13.680 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:13.680 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:25:13.680 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:13.680 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:25:13.680 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.680 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:13.680 BaseBdev1_malloc 00:25:13.940 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.940 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:13.940 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.940 13:20:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:13.940 [2024-12-06 13:20:00.697527] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:13.940 [2024-12-06 13:20:00.697616] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:13.940 [2024-12-06 13:20:00.697657] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:13.940 [2024-12-06 13:20:00.697678] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:13.940 [2024-12-06 13:20:00.700427] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:13.940 BaseBdev1 00:25:13.940 [2024-12-06 13:20:00.700654] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:13.940 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.940 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:25:13.940 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:25:13.940 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.940 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:13.940 BaseBdev2_malloc 00:25:13.940 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.940 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:13.940 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.940 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:25:13.940 [2024-12-06 13:20:00.754948] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:13.940 [2024-12-06 13:20:00.755221] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:13.940 [2024-12-06 13:20:00.755296] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:13.940 [2024-12-06 13:20:00.755439] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:13.940 [2024-12-06 13:20:00.758318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:13.940 [2024-12-06 13:20:00.758510] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:13.940 BaseBdev2 00:25:13.940 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.940 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:25:13.940 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.940 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:13.940 spare_malloc 00:25:13.940 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.940 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:13.940 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.940 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:13.940 spare_delay 00:25:13.940 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.940 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:13.940 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.940 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:13.940 [2024-12-06 13:20:00.826871] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:13.940 [2024-12-06 13:20:00.827119] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:13.940 [2024-12-06 13:20:00.827171] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:13.940 [2024-12-06 13:20:00.827193] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:13.940 [2024-12-06 13:20:00.829918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:13.940 spare 00:25:13.940 [2024-12-06 13:20:00.830103] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:13.940 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.940 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:25:13.940 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.940 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:13.941 [2024-12-06 13:20:00.835019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:13.941 [2024-12-06 13:20:00.837871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:13.941 [2024-12-06 
13:20:00.838291] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:13.941 [2024-12-06 13:20:00.838428] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:13.941 [2024-12-06 13:20:00.838601] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:13.941 [2024-12-06 13:20:00.838830] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:13.941 [2024-12-06 13:20:00.838941] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:25:13.941 [2024-12-06 13:20:00.839290] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:13.941 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.941 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:13.941 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:13.941 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:13.941 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:13.941 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:13.941 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:13.941 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:13.941 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:13.941 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:25:13.941 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:13.941 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:13.941 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.941 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:13.941 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:13.941 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.941 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:13.941 "name": "raid_bdev1", 00:25:13.941 "uuid": "6bd3c2e6-62be-4684-ae29-6687f9a903c8", 00:25:13.941 "strip_size_kb": 0, 00:25:13.941 "state": "online", 00:25:13.941 "raid_level": "raid1", 00:25:13.941 "superblock": true, 00:25:13.941 "num_base_bdevs": 2, 00:25:13.941 "num_base_bdevs_discovered": 2, 00:25:13.941 "num_base_bdevs_operational": 2, 00:25:13.941 "base_bdevs_list": [ 00:25:13.941 { 00:25:13.941 "name": "BaseBdev1", 00:25:13.941 "uuid": "77ddced5-d4d1-5111-aab0-1ce13b3bad48", 00:25:13.941 "is_configured": true, 00:25:13.941 "data_offset": 256, 00:25:13.941 "data_size": 7936 00:25:13.941 }, 00:25:13.941 { 00:25:13.941 "name": "BaseBdev2", 00:25:13.941 "uuid": "cfd5c672-c4a6-5549-9d23-022a2b358ff1", 00:25:13.941 "is_configured": true, 00:25:13.941 "data_offset": 256, 00:25:13.941 "data_size": 7936 00:25:13.941 } 00:25:13.941 ] 00:25:13.941 }' 00:25:13.941 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:13.941 13:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:14.509 13:20:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:14.509 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:25:14.509 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.509 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:14.509 [2024-12-06 13:20:01.343981] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:14.509 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.509 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:25:14.509 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:14.509 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.509 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:14.509 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:14.509 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.509 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:25:14.509 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:25:14.509 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:25:14.509 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:25:14.509 13:20:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.509 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:14.509 [2024-12-06 13:20:01.443510] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:14.509 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.509 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:14.509 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:14.509 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:14.509 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:14.510 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:14.510 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:14.510 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:14.510 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:14.510 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:14.510 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:14.510 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:14.510 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:14.510 13:20:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.510 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:14.510 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.510 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:14.510 "name": "raid_bdev1", 00:25:14.510 "uuid": "6bd3c2e6-62be-4684-ae29-6687f9a903c8", 00:25:14.510 "strip_size_kb": 0, 00:25:14.510 "state": "online", 00:25:14.510 "raid_level": "raid1", 00:25:14.510 "superblock": true, 00:25:14.510 "num_base_bdevs": 2, 00:25:14.510 "num_base_bdevs_discovered": 1, 00:25:14.510 "num_base_bdevs_operational": 1, 00:25:14.510 "base_bdevs_list": [ 00:25:14.510 { 00:25:14.510 "name": null, 00:25:14.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:14.510 "is_configured": false, 00:25:14.510 "data_offset": 0, 00:25:14.510 "data_size": 7936 00:25:14.510 }, 00:25:14.510 { 00:25:14.510 "name": "BaseBdev2", 00:25:14.510 "uuid": "cfd5c672-c4a6-5549-9d23-022a2b358ff1", 00:25:14.510 "is_configured": true, 00:25:14.510 "data_offset": 256, 00:25:14.510 "data_size": 7936 00:25:14.510 } 00:25:14.510 ] 00:25:14.510 }' 00:25:14.510 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:14.510 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:15.076 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:15.076 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.076 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:15.076 [2024-12-06 13:20:01.947736] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:15.076 [2024-12-06 13:20:01.966120] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:15.076 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.076 13:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:25:15.076 [2024-12-06 13:20:01.968895] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:16.012 13:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:16.012 13:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:16.012 13:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:16.012 13:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:16.012 13:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:16.012 13:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:16.012 13:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:16.012 13:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.012 13:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:16.012 13:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.270 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:16.270 "name": "raid_bdev1", 00:25:16.270 
"uuid": "6bd3c2e6-62be-4684-ae29-6687f9a903c8", 00:25:16.270 "strip_size_kb": 0, 00:25:16.270 "state": "online", 00:25:16.270 "raid_level": "raid1", 00:25:16.270 "superblock": true, 00:25:16.271 "num_base_bdevs": 2, 00:25:16.271 "num_base_bdevs_discovered": 2, 00:25:16.271 "num_base_bdevs_operational": 2, 00:25:16.271 "process": { 00:25:16.271 "type": "rebuild", 00:25:16.271 "target": "spare", 00:25:16.271 "progress": { 00:25:16.271 "blocks": 2560, 00:25:16.271 "percent": 32 00:25:16.271 } 00:25:16.271 }, 00:25:16.271 "base_bdevs_list": [ 00:25:16.271 { 00:25:16.271 "name": "spare", 00:25:16.271 "uuid": "165d5122-ea34-5fae-b71f-31972b653deb", 00:25:16.271 "is_configured": true, 00:25:16.271 "data_offset": 256, 00:25:16.271 "data_size": 7936 00:25:16.271 }, 00:25:16.271 { 00:25:16.271 "name": "BaseBdev2", 00:25:16.271 "uuid": "cfd5c672-c4a6-5549-9d23-022a2b358ff1", 00:25:16.271 "is_configured": true, 00:25:16.271 "data_offset": 256, 00:25:16.271 "data_size": 7936 00:25:16.271 } 00:25:16.271 ] 00:25:16.271 }' 00:25:16.271 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:16.271 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:16.271 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:16.271 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:16.271 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:25:16.271 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.271 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:16.271 [2024-12-06 13:20:03.138804] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:25:16.271 [2024-12-06 13:20:03.180503] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:16.271 [2024-12-06 13:20:03.180667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:16.271 [2024-12-06 13:20:03.180697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:16.271 [2024-12-06 13:20:03.180719] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:16.271 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.271 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:16.271 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:16.271 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:16.271 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:16.271 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:16.271 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:16.271 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:16.271 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:16.271 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:16.271 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:16.271 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:16.271 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:16.271 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.271 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:16.271 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.271 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:16.271 "name": "raid_bdev1", 00:25:16.271 "uuid": "6bd3c2e6-62be-4684-ae29-6687f9a903c8", 00:25:16.271 "strip_size_kb": 0, 00:25:16.271 "state": "online", 00:25:16.271 "raid_level": "raid1", 00:25:16.271 "superblock": true, 00:25:16.271 "num_base_bdevs": 2, 00:25:16.271 "num_base_bdevs_discovered": 1, 00:25:16.271 "num_base_bdevs_operational": 1, 00:25:16.271 "base_bdevs_list": [ 00:25:16.271 { 00:25:16.271 "name": null, 00:25:16.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.271 "is_configured": false, 00:25:16.271 "data_offset": 0, 00:25:16.271 "data_size": 7936 00:25:16.271 }, 00:25:16.271 { 00:25:16.271 "name": "BaseBdev2", 00:25:16.271 "uuid": "cfd5c672-c4a6-5549-9d23-022a2b358ff1", 00:25:16.271 "is_configured": true, 00:25:16.271 "data_offset": 256, 00:25:16.271 "data_size": 7936 00:25:16.271 } 00:25:16.271 ] 00:25:16.271 }' 00:25:16.271 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:16.271 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:16.837 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:16.837 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:25:16.837 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:16.837 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:16.837 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:16.837 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:16.837 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.837 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:16.837 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:16.837 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.837 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:16.837 "name": "raid_bdev1", 00:25:16.837 "uuid": "6bd3c2e6-62be-4684-ae29-6687f9a903c8", 00:25:16.837 "strip_size_kb": 0, 00:25:16.837 "state": "online", 00:25:16.837 "raid_level": "raid1", 00:25:16.837 "superblock": true, 00:25:16.837 "num_base_bdevs": 2, 00:25:16.837 "num_base_bdevs_discovered": 1, 00:25:16.837 "num_base_bdevs_operational": 1, 00:25:16.837 "base_bdevs_list": [ 00:25:16.837 { 00:25:16.837 "name": null, 00:25:16.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.837 "is_configured": false, 00:25:16.837 "data_offset": 0, 00:25:16.837 "data_size": 7936 00:25:16.837 }, 00:25:16.837 { 00:25:16.837 "name": "BaseBdev2", 00:25:16.837 "uuid": "cfd5c672-c4a6-5549-9d23-022a2b358ff1", 00:25:16.837 "is_configured": true, 00:25:16.837 "data_offset": 256, 00:25:16.837 "data_size": 7936 00:25:16.837 } 00:25:16.837 ] 00:25:16.837 }' 
00:25:16.837 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:16.837 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:16.837 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:17.094 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:17.094 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:17.094 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.094 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:17.094 [2024-12-06 13:20:03.863439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:17.094 [2024-12-06 13:20:03.881034] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:25:17.094 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.094 13:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:25:17.094 [2024-12-06 13:20:03.883865] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:18.026 13:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:18.026 13:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:18.026 13:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:18.026 13:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:25:18.026 13:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:18.026 13:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:18.026 13:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.026 13:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:18.026 13:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:18.026 13:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.026 13:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:18.026 "name": "raid_bdev1", 00:25:18.026 "uuid": "6bd3c2e6-62be-4684-ae29-6687f9a903c8", 00:25:18.026 "strip_size_kb": 0, 00:25:18.026 "state": "online", 00:25:18.026 "raid_level": "raid1", 00:25:18.026 "superblock": true, 00:25:18.026 "num_base_bdevs": 2, 00:25:18.026 "num_base_bdevs_discovered": 2, 00:25:18.026 "num_base_bdevs_operational": 2, 00:25:18.026 "process": { 00:25:18.026 "type": "rebuild", 00:25:18.026 "target": "spare", 00:25:18.026 "progress": { 00:25:18.026 "blocks": 2304, 00:25:18.026 "percent": 29 00:25:18.026 } 00:25:18.026 }, 00:25:18.026 "base_bdevs_list": [ 00:25:18.026 { 00:25:18.026 "name": "spare", 00:25:18.026 "uuid": "165d5122-ea34-5fae-b71f-31972b653deb", 00:25:18.026 "is_configured": true, 00:25:18.026 "data_offset": 256, 00:25:18.026 "data_size": 7936 00:25:18.026 }, 00:25:18.026 { 00:25:18.026 "name": "BaseBdev2", 00:25:18.026 "uuid": "cfd5c672-c4a6-5549-9d23-022a2b358ff1", 00:25:18.026 "is_configured": true, 00:25:18.026 "data_offset": 256, 00:25:18.026 "data_size": 7936 00:25:18.026 } 00:25:18.026 ] 00:25:18.026 }' 00:25:18.026 13:20:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:18.026 13:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:18.026 13:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:18.026 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:18.026 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:25:18.026 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:25:18.026 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:25:18.026 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:25:18.026 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:25:18.026 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:25:18.026 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=816 00:25:18.026 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:18.026 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:18.026 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:18.026 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:18.026 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:18.026 13:20:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:18.283 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:18.283 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:18.283 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.283 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:18.283 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.283 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:18.283 "name": "raid_bdev1", 00:25:18.283 "uuid": "6bd3c2e6-62be-4684-ae29-6687f9a903c8", 00:25:18.283 "strip_size_kb": 0, 00:25:18.283 "state": "online", 00:25:18.283 "raid_level": "raid1", 00:25:18.283 "superblock": true, 00:25:18.283 "num_base_bdevs": 2, 00:25:18.283 "num_base_bdevs_discovered": 2, 00:25:18.283 "num_base_bdevs_operational": 2, 00:25:18.283 "process": { 00:25:18.283 "type": "rebuild", 00:25:18.283 "target": "spare", 00:25:18.283 "progress": { 00:25:18.283 "blocks": 2816, 00:25:18.283 "percent": 35 00:25:18.283 } 00:25:18.283 }, 00:25:18.283 "base_bdevs_list": [ 00:25:18.283 { 00:25:18.283 "name": "spare", 00:25:18.283 "uuid": "165d5122-ea34-5fae-b71f-31972b653deb", 00:25:18.283 "is_configured": true, 00:25:18.283 "data_offset": 256, 00:25:18.283 "data_size": 7936 00:25:18.283 }, 00:25:18.283 { 00:25:18.283 "name": "BaseBdev2", 00:25:18.283 "uuid": "cfd5c672-c4a6-5549-9d23-022a2b358ff1", 00:25:18.284 "is_configured": true, 00:25:18.284 "data_offset": 256, 00:25:18.284 "data_size": 7936 00:25:18.284 } 00:25:18.284 ] 00:25:18.284 }' 00:25:18.284 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:18.284 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:18.284 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:18.284 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:18.284 13:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:19.220 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:19.220 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:19.220 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:19.220 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:19.220 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:19.220 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:19.220 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:19.220 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.220 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:19.220 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:19.220 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.479 13:20:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:19.479 "name": "raid_bdev1", 00:25:19.479 "uuid": "6bd3c2e6-62be-4684-ae29-6687f9a903c8", 00:25:19.479 "strip_size_kb": 0, 00:25:19.479 "state": "online", 00:25:19.479 "raid_level": "raid1", 00:25:19.479 "superblock": true, 00:25:19.479 "num_base_bdevs": 2, 00:25:19.479 "num_base_bdevs_discovered": 2, 00:25:19.479 "num_base_bdevs_operational": 2, 00:25:19.479 "process": { 00:25:19.479 "type": "rebuild", 00:25:19.479 "target": "spare", 00:25:19.479 "progress": { 00:25:19.479 "blocks": 5888, 00:25:19.479 "percent": 74 00:25:19.479 } 00:25:19.479 }, 00:25:19.479 "base_bdevs_list": [ 00:25:19.479 { 00:25:19.479 "name": "spare", 00:25:19.479 "uuid": "165d5122-ea34-5fae-b71f-31972b653deb", 00:25:19.479 "is_configured": true, 00:25:19.479 "data_offset": 256, 00:25:19.479 "data_size": 7936 00:25:19.479 }, 00:25:19.479 { 00:25:19.479 "name": "BaseBdev2", 00:25:19.479 "uuid": "cfd5c672-c4a6-5549-9d23-022a2b358ff1", 00:25:19.479 "is_configured": true, 00:25:19.479 "data_offset": 256, 00:25:19.479 "data_size": 7936 00:25:19.479 } 00:25:19.479 ] 00:25:19.479 }' 00:25:19.479 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:19.479 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:19.479 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:19.479 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:19.479 13:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:25:20.059 [2024-12-06 13:20:07.012722] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:20.059 [2024-12-06 13:20:07.012869] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:20.059 [2024-12-06 13:20:07.013071] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:20.626 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:25:20.626 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:20.626 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:20.626 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:20.626 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:20.626 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:20.626 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:20.626 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:20.626 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.626 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:20.626 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.626 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:20.626 "name": "raid_bdev1", 00:25:20.626 "uuid": "6bd3c2e6-62be-4684-ae29-6687f9a903c8", 00:25:20.626 "strip_size_kb": 0, 00:25:20.626 "state": "online", 00:25:20.626 "raid_level": "raid1", 00:25:20.626 "superblock": true, 00:25:20.626 "num_base_bdevs": 2, 00:25:20.626 
"num_base_bdevs_discovered": 2, 00:25:20.626 "num_base_bdevs_operational": 2, 00:25:20.626 "base_bdevs_list": [ 00:25:20.626 { 00:25:20.626 "name": "spare", 00:25:20.626 "uuid": "165d5122-ea34-5fae-b71f-31972b653deb", 00:25:20.626 "is_configured": true, 00:25:20.626 "data_offset": 256, 00:25:20.626 "data_size": 7936 00:25:20.626 }, 00:25:20.626 { 00:25:20.626 "name": "BaseBdev2", 00:25:20.626 "uuid": "cfd5c672-c4a6-5549-9d23-022a2b358ff1", 00:25:20.626 "is_configured": true, 00:25:20.626 "data_offset": 256, 00:25:20.626 "data_size": 7936 00:25:20.626 } 00:25:20.626 ] 00:25:20.626 }' 00:25:20.626 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:20.626 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:20.626 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:20.626 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:25:20.626 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:25:20.626 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:20.626 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:20.626 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:20.626 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:20.626 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:20.626 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:20.626 13:20:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:20.626 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.626 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:20.626 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.626 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:20.626 "name": "raid_bdev1", 00:25:20.626 "uuid": "6bd3c2e6-62be-4684-ae29-6687f9a903c8", 00:25:20.626 "strip_size_kb": 0, 00:25:20.626 "state": "online", 00:25:20.626 "raid_level": "raid1", 00:25:20.626 "superblock": true, 00:25:20.626 "num_base_bdevs": 2, 00:25:20.626 "num_base_bdevs_discovered": 2, 00:25:20.626 "num_base_bdevs_operational": 2, 00:25:20.626 "base_bdevs_list": [ 00:25:20.626 { 00:25:20.626 "name": "spare", 00:25:20.626 "uuid": "165d5122-ea34-5fae-b71f-31972b653deb", 00:25:20.626 "is_configured": true, 00:25:20.626 "data_offset": 256, 00:25:20.626 "data_size": 7936 00:25:20.626 }, 00:25:20.626 { 00:25:20.626 "name": "BaseBdev2", 00:25:20.626 "uuid": "cfd5c672-c4a6-5549-9d23-022a2b358ff1", 00:25:20.626 "is_configured": true, 00:25:20.626 "data_offset": 256, 00:25:20.626 "data_size": 7936 00:25:20.626 } 00:25:20.626 ] 00:25:20.626 }' 00:25:20.626 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:20.626 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:20.626 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:20.884 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:20.884 13:20:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:20.884 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:20.884 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:20.884 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:20.884 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:20.884 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:20.884 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:20.884 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:20.884 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:20.884 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:20.884 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:20.884 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:20.884 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.884 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:20.884 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.884 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:20.884 "name": 
"raid_bdev1", 00:25:20.884 "uuid": "6bd3c2e6-62be-4684-ae29-6687f9a903c8", 00:25:20.884 "strip_size_kb": 0, 00:25:20.884 "state": "online", 00:25:20.884 "raid_level": "raid1", 00:25:20.884 "superblock": true, 00:25:20.884 "num_base_bdevs": 2, 00:25:20.884 "num_base_bdevs_discovered": 2, 00:25:20.884 "num_base_bdevs_operational": 2, 00:25:20.884 "base_bdevs_list": [ 00:25:20.884 { 00:25:20.884 "name": "spare", 00:25:20.884 "uuid": "165d5122-ea34-5fae-b71f-31972b653deb", 00:25:20.884 "is_configured": true, 00:25:20.884 "data_offset": 256, 00:25:20.884 "data_size": 7936 00:25:20.884 }, 00:25:20.884 { 00:25:20.884 "name": "BaseBdev2", 00:25:20.884 "uuid": "cfd5c672-c4a6-5549-9d23-022a2b358ff1", 00:25:20.884 "is_configured": true, 00:25:20.884 "data_offset": 256, 00:25:20.884 "data_size": 7936 00:25:20.884 } 00:25:20.885 ] 00:25:20.885 }' 00:25:20.885 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:20.885 13:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:21.170 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:21.170 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.170 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:21.170 [2024-12-06 13:20:08.174820] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:21.170 [2024-12-06 13:20:08.174869] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:21.170 [2024-12-06 13:20:08.175000] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:21.170 [2024-12-06 13:20:08.175122] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:21.170 [2024-12-06 
13:20:08.175142] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:21.170 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.170 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:21.170 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:25:21.170 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.170 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:21.428 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.428 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:25:21.428 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:25:21.428 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:25:21.428 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:25:21.428 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.428 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:21.428 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.428 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:21.428 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.428 13:20:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:21.428 [2024-12-06 13:20:08.242799] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:21.428 [2024-12-06 13:20:08.242981] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:21.428 [2024-12-06 13:20:08.243071] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:25:21.428 [2024-12-06 13:20:08.243264] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:21.428 [2024-12-06 13:20:08.246032] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:21.428 [2024-12-06 13:20:08.246077] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:21.428 [2024-12-06 13:20:08.246160] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:21.428 [2024-12-06 13:20:08.246227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:21.428 [2024-12-06 13:20:08.246378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:21.428 spare 00:25:21.428 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.428 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:25:21.428 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.428 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:21.428 [2024-12-06 13:20:08.346521] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:25:21.428 [2024-12-06 13:20:08.346688] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:25:21.428 [2024-12-06 13:20:08.346839] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:25:21.428 [2024-12-06 13:20:08.347167] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:25:21.428 [2024-12-06 13:20:08.347191] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:25:21.428 [2024-12-06 13:20:08.347322] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:21.428 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.428 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:21.428 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:21.428 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:21.428 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:21.428 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:21.428 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:21.428 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:21.428 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:21.428 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:21.428 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:21.428 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:21.428 13:20:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:21.428 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.428 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:21.429 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.429 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:21.429 "name": "raid_bdev1", 00:25:21.429 "uuid": "6bd3c2e6-62be-4684-ae29-6687f9a903c8", 00:25:21.429 "strip_size_kb": 0, 00:25:21.429 "state": "online", 00:25:21.429 "raid_level": "raid1", 00:25:21.429 "superblock": true, 00:25:21.429 "num_base_bdevs": 2, 00:25:21.429 "num_base_bdevs_discovered": 2, 00:25:21.429 "num_base_bdevs_operational": 2, 00:25:21.429 "base_bdevs_list": [ 00:25:21.429 { 00:25:21.429 "name": "spare", 00:25:21.429 "uuid": "165d5122-ea34-5fae-b71f-31972b653deb", 00:25:21.429 "is_configured": true, 00:25:21.429 "data_offset": 256, 00:25:21.429 "data_size": 7936 00:25:21.429 }, 00:25:21.429 { 00:25:21.429 "name": "BaseBdev2", 00:25:21.429 "uuid": "cfd5c672-c4a6-5549-9d23-022a2b358ff1", 00:25:21.429 "is_configured": true, 00:25:21.429 "data_offset": 256, 00:25:21.429 "data_size": 7936 00:25:21.429 } 00:25:21.429 ] 00:25:21.429 }' 00:25:21.429 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:21.429 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:21.996 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:21.996 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:21.996 13:20:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:21.996 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:21.996 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:21.996 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:21.996 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:21.996 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.996 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:21.996 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.996 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:21.996 "name": "raid_bdev1", 00:25:21.996 "uuid": "6bd3c2e6-62be-4684-ae29-6687f9a903c8", 00:25:21.996 "strip_size_kb": 0, 00:25:21.996 "state": "online", 00:25:21.996 "raid_level": "raid1", 00:25:21.996 "superblock": true, 00:25:21.996 "num_base_bdevs": 2, 00:25:21.996 "num_base_bdevs_discovered": 2, 00:25:21.996 "num_base_bdevs_operational": 2, 00:25:21.996 "base_bdevs_list": [ 00:25:21.996 { 00:25:21.996 "name": "spare", 00:25:21.996 "uuid": "165d5122-ea34-5fae-b71f-31972b653deb", 00:25:21.996 "is_configured": true, 00:25:21.997 "data_offset": 256, 00:25:21.997 "data_size": 7936 00:25:21.997 }, 00:25:21.997 { 00:25:21.997 "name": "BaseBdev2", 00:25:21.997 "uuid": "cfd5c672-c4a6-5549-9d23-022a2b358ff1", 00:25:21.997 "is_configured": true, 00:25:21.997 "data_offset": 256, 00:25:21.997 "data_size": 7936 00:25:21.997 } 00:25:21.997 ] 00:25:21.997 }' 00:25:21.997 13:20:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:21.997 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:21.997 13:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:22.255 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:22.255 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:22.255 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.255 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:22.255 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:22.255 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.255 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:25:22.255 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:25:22.255 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.255 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:22.255 [2024-12-06 13:20:09.067679] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:22.255 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.255 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:22.255 13:20:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:22.255 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:22.255 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:22.255 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:22.255 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:22.255 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:22.255 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:22.255 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:22.255 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:22.255 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:22.255 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:22.255 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.255 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:22.255 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.255 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:22.255 "name": "raid_bdev1", 00:25:22.255 "uuid": "6bd3c2e6-62be-4684-ae29-6687f9a903c8", 00:25:22.255 "strip_size_kb": 0, 00:25:22.255 "state": "online", 00:25:22.255 
"raid_level": "raid1", 00:25:22.255 "superblock": true, 00:25:22.255 "num_base_bdevs": 2, 00:25:22.255 "num_base_bdevs_discovered": 1, 00:25:22.255 "num_base_bdevs_operational": 1, 00:25:22.255 "base_bdevs_list": [ 00:25:22.255 { 00:25:22.255 "name": null, 00:25:22.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:22.255 "is_configured": false, 00:25:22.255 "data_offset": 0, 00:25:22.255 "data_size": 7936 00:25:22.255 }, 00:25:22.255 { 00:25:22.255 "name": "BaseBdev2", 00:25:22.255 "uuid": "cfd5c672-c4a6-5549-9d23-022a2b358ff1", 00:25:22.255 "is_configured": true, 00:25:22.255 "data_offset": 256, 00:25:22.255 "data_size": 7936 00:25:22.255 } 00:25:22.255 ] 00:25:22.255 }' 00:25:22.255 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:22.255 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:22.823 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:25:22.823 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.823 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:22.823 [2024-12-06 13:20:09.539813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:22.823 [2024-12-06 13:20:09.540120] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:22.823 [2024-12-06 13:20:09.540151] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:25:22.823 [2024-12-06 13:20:09.540212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:22.823 [2024-12-06 13:20:09.556784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:25:22.823 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.823 13:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:25:22.823 [2024-12-06 13:20:09.559510] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:23.757 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:23.757 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:23.757 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:23.757 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:23.757 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:23.757 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:23.757 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:23.757 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.757 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:23.757 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.757 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:25:23.757 "name": "raid_bdev1", 00:25:23.757 "uuid": "6bd3c2e6-62be-4684-ae29-6687f9a903c8", 00:25:23.757 "strip_size_kb": 0, 00:25:23.757 "state": "online", 00:25:23.757 "raid_level": "raid1", 00:25:23.757 "superblock": true, 00:25:23.757 "num_base_bdevs": 2, 00:25:23.757 "num_base_bdevs_discovered": 2, 00:25:23.757 "num_base_bdevs_operational": 2, 00:25:23.757 "process": { 00:25:23.757 "type": "rebuild", 00:25:23.757 "target": "spare", 00:25:23.757 "progress": { 00:25:23.757 "blocks": 2304, 00:25:23.757 "percent": 29 00:25:23.757 } 00:25:23.757 }, 00:25:23.757 "base_bdevs_list": [ 00:25:23.757 { 00:25:23.757 "name": "spare", 00:25:23.757 "uuid": "165d5122-ea34-5fae-b71f-31972b653deb", 00:25:23.757 "is_configured": true, 00:25:23.757 "data_offset": 256, 00:25:23.757 "data_size": 7936 00:25:23.757 }, 00:25:23.757 { 00:25:23.757 "name": "BaseBdev2", 00:25:23.757 "uuid": "cfd5c672-c4a6-5549-9d23-022a2b358ff1", 00:25:23.757 "is_configured": true, 00:25:23.757 "data_offset": 256, 00:25:23.757 "data_size": 7936 00:25:23.757 } 00:25:23.757 ] 00:25:23.757 }' 00:25:23.757 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:23.757 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:23.757 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:23.757 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:23.757 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:25:23.757 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.757 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:23.757 [2024-12-06 13:20:10.721742] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:24.016 [2024-12-06 13:20:10.771770] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:24.016 [2024-12-06 13:20:10.772005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:24.016 [2024-12-06 13:20:10.772040] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:24.016 [2024-12-06 13:20:10.772058] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:24.016 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.016 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:24.016 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:24.016 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:24.016 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:24.016 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:24.016 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:24.016 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:24.016 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:24.016 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:24.016 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:24.016 13:20:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:24.016 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:24.016 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.016 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:24.016 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.016 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:24.016 "name": "raid_bdev1", 00:25:24.016 "uuid": "6bd3c2e6-62be-4684-ae29-6687f9a903c8", 00:25:24.016 "strip_size_kb": 0, 00:25:24.016 "state": "online", 00:25:24.016 "raid_level": "raid1", 00:25:24.016 "superblock": true, 00:25:24.016 "num_base_bdevs": 2, 00:25:24.016 "num_base_bdevs_discovered": 1, 00:25:24.016 "num_base_bdevs_operational": 1, 00:25:24.016 "base_bdevs_list": [ 00:25:24.016 { 00:25:24.016 "name": null, 00:25:24.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:24.016 "is_configured": false, 00:25:24.016 "data_offset": 0, 00:25:24.016 "data_size": 7936 00:25:24.016 }, 00:25:24.016 { 00:25:24.016 "name": "BaseBdev2", 00:25:24.016 "uuid": "cfd5c672-c4a6-5549-9d23-022a2b358ff1", 00:25:24.016 "is_configured": true, 00:25:24.016 "data_offset": 256, 00:25:24.016 "data_size": 7936 00:25:24.016 } 00:25:24.016 ] 00:25:24.016 }' 00:25:24.016 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:24.016 13:20:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:24.582 13:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:25:24.582 13:20:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.582 13:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:24.582 [2024-12-06 13:20:11.390033] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:24.582 [2024-12-06 13:20:11.390141] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:24.582 [2024-12-06 13:20:11.390187] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:25:24.582 [2024-12-06 13:20:11.390208] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:24.582 [2024-12-06 13:20:11.390542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:24.582 [2024-12-06 13:20:11.390577] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:24.582 [2024-12-06 13:20:11.390666] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:24.582 [2024-12-06 13:20:11.390692] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:25:24.582 [2024-12-06 13:20:11.390706] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:25:24.582 [2024-12-06 13:20:11.390749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:24.582 spare 00:25:24.582 [2024-12-06 13:20:11.407348] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:25:24.582 13:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.582 13:20:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:25:24.582 [2024-12-06 13:20:11.410021] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:25.516 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:25.516 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:25.516 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:25:25.516 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:25:25.516 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:25.516 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:25.516 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.516 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:25.516 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:25.516 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.516 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:25:25.516 "name": "raid_bdev1", 00:25:25.516 "uuid": "6bd3c2e6-62be-4684-ae29-6687f9a903c8", 00:25:25.516 "strip_size_kb": 0, 00:25:25.516 "state": "online", 00:25:25.516 "raid_level": "raid1", 00:25:25.516 "superblock": true, 00:25:25.516 "num_base_bdevs": 2, 00:25:25.516 "num_base_bdevs_discovered": 2, 00:25:25.516 "num_base_bdevs_operational": 2, 00:25:25.516 "process": { 00:25:25.516 "type": "rebuild", 00:25:25.516 "target": "spare", 00:25:25.516 "progress": { 00:25:25.516 "blocks": 2304, 00:25:25.516 "percent": 29 00:25:25.516 } 00:25:25.516 }, 00:25:25.516 "base_bdevs_list": [ 00:25:25.516 { 00:25:25.516 "name": "spare", 00:25:25.516 "uuid": "165d5122-ea34-5fae-b71f-31972b653deb", 00:25:25.516 "is_configured": true, 00:25:25.516 "data_offset": 256, 00:25:25.516 "data_size": 7936 00:25:25.516 }, 00:25:25.516 { 00:25:25.516 "name": "BaseBdev2", 00:25:25.516 "uuid": "cfd5c672-c4a6-5549-9d23-022a2b358ff1", 00:25:25.516 "is_configured": true, 00:25:25.516 "data_offset": 256, 00:25:25.516 "data_size": 7936 00:25:25.516 } 00:25:25.516 ] 00:25:25.516 }' 00:25:25.516 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:25.516 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:25.516 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:25.775 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:25:25.775 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:25:25.775 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.775 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:25.775 [2024-12-06 
13:20:12.571584] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:25.775 [2024-12-06 13:20:12.621739] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:25.775 [2024-12-06 13:20:12.622027] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:25.775 [2024-12-06 13:20:12.622065] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:25.775 [2024-12-06 13:20:12.622079] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:25.775 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.775 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:25.775 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:25.775 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:25.775 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:25.775 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:25.775 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:25.775 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:25.775 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:25.775 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:25.775 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:25.775 13:20:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:25.775 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:25.775 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.775 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:25.775 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.775 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:25.775 "name": "raid_bdev1", 00:25:25.775 "uuid": "6bd3c2e6-62be-4684-ae29-6687f9a903c8", 00:25:25.775 "strip_size_kb": 0, 00:25:25.775 "state": "online", 00:25:25.775 "raid_level": "raid1", 00:25:25.775 "superblock": true, 00:25:25.775 "num_base_bdevs": 2, 00:25:25.775 "num_base_bdevs_discovered": 1, 00:25:25.775 "num_base_bdevs_operational": 1, 00:25:25.775 "base_bdevs_list": [ 00:25:25.775 { 00:25:25.775 "name": null, 00:25:25.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:25.775 "is_configured": false, 00:25:25.775 "data_offset": 0, 00:25:25.775 "data_size": 7936 00:25:25.775 }, 00:25:25.775 { 00:25:25.775 "name": "BaseBdev2", 00:25:25.775 "uuid": "cfd5c672-c4a6-5549-9d23-022a2b358ff1", 00:25:25.775 "is_configured": true, 00:25:25.775 "data_offset": 256, 00:25:25.775 "data_size": 7936 00:25:25.775 } 00:25:25.775 ] 00:25:25.775 }' 00:25:25.775 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:25.775 13:20:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:26.341 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:26.341 13:20:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:26.341 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:26.341 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:26.341 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:26.341 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:26.341 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.341 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:26.341 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:26.341 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.341 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:26.341 "name": "raid_bdev1", 00:25:26.341 "uuid": "6bd3c2e6-62be-4684-ae29-6687f9a903c8", 00:25:26.341 "strip_size_kb": 0, 00:25:26.341 "state": "online", 00:25:26.341 "raid_level": "raid1", 00:25:26.341 "superblock": true, 00:25:26.341 "num_base_bdevs": 2, 00:25:26.341 "num_base_bdevs_discovered": 1, 00:25:26.341 "num_base_bdevs_operational": 1, 00:25:26.341 "base_bdevs_list": [ 00:25:26.341 { 00:25:26.341 "name": null, 00:25:26.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:26.341 "is_configured": false, 00:25:26.341 "data_offset": 0, 00:25:26.341 "data_size": 7936 00:25:26.341 }, 00:25:26.341 { 00:25:26.341 "name": "BaseBdev2", 00:25:26.341 "uuid": "cfd5c672-c4a6-5549-9d23-022a2b358ff1", 00:25:26.341 "is_configured": true, 00:25:26.341 "data_offset": 256, 
00:25:26.341 "data_size": 7936 00:25:26.341 } 00:25:26.341 ] 00:25:26.341 }' 00:25:26.341 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:26.341 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:26.341 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:26.341 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:26.341 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:25:26.341 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.341 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:26.341 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.341 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:26.341 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.341 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:26.341 [2024-12-06 13:20:13.311810] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:26.341 [2024-12-06 13:20:13.311934] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:26.341 [2024-12-06 13:20:13.312044] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:25:26.341 [2024-12-06 13:20:13.312069] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:26.341 [2024-12-06 13:20:13.312345] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:26.341 [2024-12-06 13:20:13.312370] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:26.341 [2024-12-06 13:20:13.312450] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:26.341 [2024-12-06 13:20:13.312489] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:26.341 [2024-12-06 13:20:13.312506] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:26.341 [2024-12-06 13:20:13.312521] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:25:26.341 BaseBdev1 00:25:26.341 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.341 13:20:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:25:27.718 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:27.718 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:27.718 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:27.718 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:27.718 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:27.718 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:27.718 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:27.718 13:20:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:27.718 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:27.718 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:27.718 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:27.718 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.718 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:27.718 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:27.718 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.718 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:27.718 "name": "raid_bdev1", 00:25:27.718 "uuid": "6bd3c2e6-62be-4684-ae29-6687f9a903c8", 00:25:27.718 "strip_size_kb": 0, 00:25:27.718 "state": "online", 00:25:27.718 "raid_level": "raid1", 00:25:27.718 "superblock": true, 00:25:27.718 "num_base_bdevs": 2, 00:25:27.718 "num_base_bdevs_discovered": 1, 00:25:27.718 "num_base_bdevs_operational": 1, 00:25:27.718 "base_bdevs_list": [ 00:25:27.718 { 00:25:27.718 "name": null, 00:25:27.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:27.718 "is_configured": false, 00:25:27.718 "data_offset": 0, 00:25:27.718 "data_size": 7936 00:25:27.718 }, 00:25:27.718 { 00:25:27.718 "name": "BaseBdev2", 00:25:27.718 "uuid": "cfd5c672-c4a6-5549-9d23-022a2b358ff1", 00:25:27.718 "is_configured": true, 00:25:27.718 "data_offset": 256, 00:25:27.718 "data_size": 7936 00:25:27.718 } 00:25:27.718 ] 00:25:27.718 }' 00:25:27.718 13:20:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:27.718 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:27.977 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:27.977 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:27.977 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:27.977 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:27.977 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:27.977 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:27.977 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:27.977 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.977 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:27.977 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.977 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:27.977 "name": "raid_bdev1", 00:25:27.977 "uuid": "6bd3c2e6-62be-4684-ae29-6687f9a903c8", 00:25:27.977 "strip_size_kb": 0, 00:25:27.977 "state": "online", 00:25:27.977 "raid_level": "raid1", 00:25:27.977 "superblock": true, 00:25:27.977 "num_base_bdevs": 2, 00:25:27.977 "num_base_bdevs_discovered": 1, 00:25:27.977 "num_base_bdevs_operational": 1, 00:25:27.977 "base_bdevs_list": [ 00:25:27.977 { 00:25:27.977 "name": 
null, 00:25:27.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:27.977 "is_configured": false, 00:25:27.977 "data_offset": 0, 00:25:27.977 "data_size": 7936 00:25:27.977 }, 00:25:27.978 { 00:25:27.978 "name": "BaseBdev2", 00:25:27.978 "uuid": "cfd5c672-c4a6-5549-9d23-022a2b358ff1", 00:25:27.978 "is_configured": true, 00:25:27.978 "data_offset": 256, 00:25:27.978 "data_size": 7936 00:25:27.978 } 00:25:27.978 ] 00:25:27.978 }' 00:25:27.978 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:27.978 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:27.978 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:27.978 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:27.978 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:27.978 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:25:27.978 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:27.978 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:27.978 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:27.978 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:27.978 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:27.978 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:27.978 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.978 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:28.236 [2024-12-06 13:20:14.992329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:28.236 [2024-12-06 13:20:14.992729] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:25:28.236 [2024-12-06 13:20:14.992765] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:28.236 request: 00:25:28.236 { 00:25:28.236 "base_bdev": "BaseBdev1", 00:25:28.236 "raid_bdev": "raid_bdev1", 00:25:28.236 "method": "bdev_raid_add_base_bdev", 00:25:28.236 "req_id": 1 00:25:28.236 } 00:25:28.236 Got JSON-RPC error response 00:25:28.236 response: 00:25:28.236 { 00:25:28.236 "code": -22, 00:25:28.236 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:25:28.236 } 00:25:28.236 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:28.236 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:25:28.236 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:28.236 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:28.236 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:28.236 13:20:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:25:29.169 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:25:29.169 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:29.169 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:29.169 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:29.169 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:29.169 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:29.169 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:29.169 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:29.169 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:29.169 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:29.169 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:29.169 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:29.169 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.169 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:29.169 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.169 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:29.169 "name": "raid_bdev1", 00:25:29.169 "uuid": "6bd3c2e6-62be-4684-ae29-6687f9a903c8", 00:25:29.169 "strip_size_kb": 0, 
00:25:29.169 "state": "online", 00:25:29.169 "raid_level": "raid1", 00:25:29.169 "superblock": true, 00:25:29.169 "num_base_bdevs": 2, 00:25:29.169 "num_base_bdevs_discovered": 1, 00:25:29.169 "num_base_bdevs_operational": 1, 00:25:29.169 "base_bdevs_list": [ 00:25:29.169 { 00:25:29.169 "name": null, 00:25:29.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:29.169 "is_configured": false, 00:25:29.169 "data_offset": 0, 00:25:29.169 "data_size": 7936 00:25:29.169 }, 00:25:29.169 { 00:25:29.169 "name": "BaseBdev2", 00:25:29.169 "uuid": "cfd5c672-c4a6-5549-9d23-022a2b358ff1", 00:25:29.169 "is_configured": true, 00:25:29.169 "data_offset": 256, 00:25:29.169 "data_size": 7936 00:25:29.169 } 00:25:29.169 ] 00:25:29.169 }' 00:25:29.169 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:29.169 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:29.735 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:29.735 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:25:29.735 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:25:29.735 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:25:29.735 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:25:29.735 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:29.735 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.735 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:29.735 13:20:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:29.735 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.735 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:25:29.735 "name": "raid_bdev1", 00:25:29.735 "uuid": "6bd3c2e6-62be-4684-ae29-6687f9a903c8", 00:25:29.735 "strip_size_kb": 0, 00:25:29.735 "state": "online", 00:25:29.735 "raid_level": "raid1", 00:25:29.735 "superblock": true, 00:25:29.735 "num_base_bdevs": 2, 00:25:29.735 "num_base_bdevs_discovered": 1, 00:25:29.735 "num_base_bdevs_operational": 1, 00:25:29.735 "base_bdevs_list": [ 00:25:29.735 { 00:25:29.735 "name": null, 00:25:29.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:29.735 "is_configured": false, 00:25:29.735 "data_offset": 0, 00:25:29.735 "data_size": 7936 00:25:29.735 }, 00:25:29.735 { 00:25:29.735 "name": "BaseBdev2", 00:25:29.735 "uuid": "cfd5c672-c4a6-5549-9d23-022a2b358ff1", 00:25:29.735 "is_configured": true, 00:25:29.735 "data_offset": 256, 00:25:29.735 "data_size": 7936 00:25:29.735 } 00:25:29.735 ] 00:25:29.735 }' 00:25:29.735 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:25:29.735 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:25:29.735 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:25:29.735 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:25:29.735 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89910 00:25:29.735 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89910 ']' 00:25:29.735 13:20:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89910 00:25:29.735 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:25:29.735 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:29.735 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89910 00:25:29.735 killing process with pid 89910 00:25:29.735 Received shutdown signal, test time was about 60.000000 seconds 00:25:29.735 00:25:29.735 Latency(us) 00:25:29.735 [2024-12-06T13:20:16.751Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:29.735 [2024-12-06T13:20:16.751Z] =================================================================================================================== 00:25:29.735 [2024-12-06T13:20:16.751Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:29.735 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:29.735 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:29.735 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89910' 00:25:29.735 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89910 00:25:29.735 [2024-12-06 13:20:16.691816] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:29.735 13:20:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89910 00:25:29.735 [2024-12-06 13:20:16.692001] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:29.735 [2024-12-06 13:20:16.692080] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:25:29.735 [2024-12-06 13:20:16.692101] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:25:29.993 [2024-12-06 13:20:16.978695] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:31.372 13:20:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:25:31.372 00:25:31.372 real 0m18.537s 00:25:31.372 user 0m25.104s 00:25:31.372 sys 0m1.470s 00:25:31.372 13:20:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:31.372 13:20:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:25:31.372 ************************************ 00:25:31.372 END TEST raid_rebuild_test_sb_md_interleaved 00:25:31.372 ************************************ 00:25:31.372 13:20:18 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:25:31.372 13:20:18 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:25:31.372 13:20:18 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89910 ']' 00:25:31.372 13:20:18 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89910 00:25:31.372 13:20:18 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:25:31.372 ************************************ 00:25:31.372 END TEST bdev_raid 00:25:31.372 ************************************ 00:25:31.372 00:25:31.372 real 13m19.172s 00:25:31.372 user 18m38.895s 00:25:31.372 sys 1m53.659s 00:25:31.372 13:20:18 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:31.372 13:20:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:31.372 13:20:18 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:25:31.372 13:20:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:31.372 13:20:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:31.372 13:20:18 -- common/autotest_common.sh@10 -- # set +x 00:25:31.372 
************************************ 00:25:31.372 START TEST spdkcli_raid 00:25:31.372 ************************************ 00:25:31.372 13:20:18 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:25:31.372 * Looking for test storage... 00:25:31.372 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:25:31.372 13:20:18 spdkcli_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:31.372 13:20:18 spdkcli_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:25:31.372 13:20:18 spdkcli_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:31.631 13:20:18 spdkcli_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:31.631 13:20:18 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:31.631 13:20:18 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:31.631 13:20:18 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:31.631 13:20:18 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:25:31.631 13:20:18 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:25:31.631 13:20:18 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:25:31.631 13:20:18 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:25:31.631 13:20:18 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:25:31.631 13:20:18 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:25:31.631 13:20:18 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:25:31.631 13:20:18 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:31.631 13:20:18 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:25:31.631 13:20:18 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:25:31.631 13:20:18 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:31.631 13:20:18 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:31.631 13:20:18 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:25:31.631 13:20:18 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:25:31.631 13:20:18 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:31.631 13:20:18 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:25:31.631 13:20:18 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:25:31.631 13:20:18 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:25:31.631 13:20:18 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:25:31.631 13:20:18 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:31.631 13:20:18 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:25:31.631 13:20:18 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:25:31.631 13:20:18 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:31.631 13:20:18 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:31.631 13:20:18 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:25:31.631 13:20:18 spdkcli_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:31.631 13:20:18 spdkcli_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:31.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.631 --rc genhtml_branch_coverage=1 00:25:31.631 --rc genhtml_function_coverage=1 00:25:31.631 --rc genhtml_legend=1 00:25:31.631 --rc geninfo_all_blocks=1 00:25:31.631 --rc geninfo_unexecuted_blocks=1 00:25:31.631 00:25:31.631 ' 00:25:31.631 13:20:18 spdkcli_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:31.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.631 --rc genhtml_branch_coverage=1 00:25:31.631 --rc genhtml_function_coverage=1 00:25:31.631 --rc genhtml_legend=1 00:25:31.631 --rc geninfo_all_blocks=1 00:25:31.631 --rc geninfo_unexecuted_blocks=1 00:25:31.631 00:25:31.631 ' 00:25:31.631 
13:20:18 spdkcli_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:31.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.631 --rc genhtml_branch_coverage=1 00:25:31.632 --rc genhtml_function_coverage=1 00:25:31.632 --rc genhtml_legend=1 00:25:31.632 --rc geninfo_all_blocks=1 00:25:31.632 --rc geninfo_unexecuted_blocks=1 00:25:31.632 00:25:31.632 ' 00:25:31.632 13:20:18 spdkcli_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:31.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.632 --rc genhtml_branch_coverage=1 00:25:31.632 --rc genhtml_function_coverage=1 00:25:31.632 --rc genhtml_legend=1 00:25:31.632 --rc geninfo_all_blocks=1 00:25:31.632 --rc geninfo_unexecuted_blocks=1 00:25:31.632 00:25:31.632 ' 00:25:31.632 13:20:18 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:25:31.632 13:20:18 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:25:31.632 13:20:18 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:25:31.632 13:20:18 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:25:31.632 13:20:18 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:25:31.632 13:20:18 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:25:31.632 13:20:18 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:25:31.632 13:20:18 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:25:31.632 13:20:18 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:25:31.632 13:20:18 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:25:31.632 13:20:18 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:25:31.632 13:20:18 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:25:31.632 13:20:18 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:25:31.632 13:20:18 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:25:31.632 13:20:18 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:25:31.632 13:20:18 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:25:31.632 13:20:18 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:25:31.632 13:20:18 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:25:31.632 13:20:18 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:25:31.632 13:20:18 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:25:31.632 13:20:18 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:25:31.632 13:20:18 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:25:31.632 13:20:18 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:25:31.632 13:20:18 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:25:31.632 13:20:18 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:25:31.632 13:20:18 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:25:31.632 13:20:18 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:25:31.632 13:20:18 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:25:31.632 13:20:18 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:25:31.632 13:20:18 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:25:31.632 13:20:18 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:25:31.632 13:20:18 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:25:31.632 13:20:18 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:25:31.632 13:20:18 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:31.632 13:20:18 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:31.632 13:20:18 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:25:31.632 13:20:18 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90587 00:25:31.632 13:20:18 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90587 00:25:31.632 13:20:18 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:25:31.632 13:20:18 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 90587 ']' 00:25:31.632 13:20:18 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:31.632 13:20:18 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:31.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:31.632 13:20:18 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:31.632 13:20:18 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:31.632 13:20:18 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:31.632 [2024-12-06 13:20:18.564804] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:25:31.632 [2024-12-06 13:20:18.565153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90587 ] 00:25:31.918 [2024-12-06 13:20:18.739923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:31.918 [2024-12-06 13:20:18.887247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.918 [2024-12-06 13:20:18.887253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:32.913 13:20:19 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:32.913 13:20:19 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:25:32.913 13:20:19 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:25:32.913 13:20:19 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:32.913 13:20:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:32.913 13:20:19 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:25:32.913 13:20:19 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:32.913 13:20:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:32.913 13:20:19 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:32.913 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:32.913 ' 00:25:34.812 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:25:34.812 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:25:34.812 13:20:21 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:25:34.812 13:20:21 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:34.812 13:20:21 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:25:34.812 13:20:21 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:25:34.812 13:20:21 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:34.812 13:20:21 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:34.812 13:20:21 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:25:34.812 ' 00:25:35.743 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:25:36.000 13:20:22 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:25:36.000 13:20:22 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:36.000 13:20:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:36.000 13:20:22 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:25:36.000 13:20:22 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:36.000 13:20:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:36.000 13:20:22 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:25:36.000 13:20:22 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:25:36.567 13:20:23 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:25:36.567 13:20:23 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:25:36.567 13:20:23 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:25:36.567 13:20:23 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:36.567 13:20:23 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:36.567 13:20:23 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:25:36.567 13:20:23 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:36.567 13:20:23 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:36.567 13:20:23 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:25:36.567 ' 00:25:37.943 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:25:37.943 13:20:24 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:25:37.943 13:20:24 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:37.943 13:20:24 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:37.943 13:20:24 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:25:37.943 13:20:24 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:37.943 13:20:24 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:37.943 13:20:24 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:25:37.943 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:25:37.943 ' 00:25:39.331 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:25:39.331 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:25:39.331 13:20:26 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:25:39.331 13:20:26 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:39.331 13:20:26 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:39.331 13:20:26 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90587 00:25:39.331 13:20:26 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90587 ']' 00:25:39.331 13:20:26 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90587 00:25:39.331 13:20:26 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:25:39.331 13:20:26 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:39.331 13:20:26 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90587 00:25:39.331 13:20:26 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:39.331 13:20:26 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:39.331 13:20:26 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90587' 00:25:39.331 killing process with pid 90587 00:25:39.331 13:20:26 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 90587 00:25:39.331 13:20:26 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 90587 00:25:41.882 13:20:28 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:25:41.882 13:20:28 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90587 ']' 00:25:41.882 13:20:28 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90587 00:25:41.882 Process with pid 90587 is not found 00:25:41.882 13:20:28 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90587 ']' 00:25:41.882 13:20:28 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90587 00:25:41.882 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90587) - No such process 00:25:41.882 13:20:28 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 90587 is not found' 00:25:41.882 13:20:28 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:25:41.882 13:20:28 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:41.882 13:20:28 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:41.882 13:20:28 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:41.882 ************************************ 00:25:41.882 END TEST spdkcli_raid 
00:25:41.882 ************************************ 00:25:41.882 00:25:41.882 real 0m10.446s 00:25:41.882 user 0m21.586s 00:25:41.882 sys 0m1.225s 00:25:41.882 13:20:28 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:41.882 13:20:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:41.882 13:20:28 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:25:41.882 13:20:28 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:41.882 13:20:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:41.882 13:20:28 -- common/autotest_common.sh@10 -- # set +x 00:25:41.882 ************************************ 00:25:41.882 START TEST blockdev_raid5f 00:25:41.882 ************************************ 00:25:41.882 13:20:28 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:25:41.882 * Looking for test storage... 00:25:41.882 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:25:41.882 13:20:28 blockdev_raid5f -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:41.882 13:20:28 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lcov --version 00:25:41.882 13:20:28 blockdev_raid5f -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:42.142 13:20:28 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:42.142 13:20:28 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:42.142 13:20:28 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:42.142 13:20:28 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:42.142 13:20:28 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:25:42.142 13:20:28 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:25:42.142 13:20:28 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:25:42.142 13:20:28 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:25:42.142 13:20:28 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:25:42.142 13:20:28 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:25:42.142 13:20:28 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:25:42.142 13:20:28 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:42.142 13:20:28 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:25:42.142 13:20:28 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:25:42.142 13:20:28 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:42.142 13:20:28 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:42.142 13:20:28 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:25:42.142 13:20:28 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:25:42.142 13:20:28 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:42.142 13:20:28 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:25:42.142 13:20:28 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:25:42.142 13:20:28 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:25:42.142 13:20:28 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:25:42.142 13:20:28 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:42.142 13:20:28 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:25:42.142 13:20:28 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:25:42.142 13:20:28 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:42.142 13:20:28 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:42.142 13:20:28 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:25:42.142 13:20:28 blockdev_raid5f -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:42.142 13:20:28 blockdev_raid5f -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:42.142 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.142 --rc genhtml_branch_coverage=1 00:25:42.142 --rc genhtml_function_coverage=1 00:25:42.142 --rc genhtml_legend=1 00:25:42.142 --rc geninfo_all_blocks=1 00:25:42.142 --rc geninfo_unexecuted_blocks=1 00:25:42.142 00:25:42.142 ' 00:25:42.142 13:20:28 blockdev_raid5f -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:42.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.142 --rc genhtml_branch_coverage=1 00:25:42.142 --rc genhtml_function_coverage=1 00:25:42.142 --rc genhtml_legend=1 00:25:42.142 --rc geninfo_all_blocks=1 00:25:42.142 --rc geninfo_unexecuted_blocks=1 00:25:42.142 00:25:42.142 ' 00:25:42.142 13:20:28 blockdev_raid5f -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:42.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.142 --rc genhtml_branch_coverage=1 00:25:42.142 --rc genhtml_function_coverage=1 00:25:42.142 --rc genhtml_legend=1 00:25:42.142 --rc geninfo_all_blocks=1 00:25:42.142 --rc geninfo_unexecuted_blocks=1 00:25:42.142 00:25:42.142 ' 00:25:42.142 13:20:28 blockdev_raid5f -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:42.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.142 --rc genhtml_branch_coverage=1 00:25:42.142 --rc genhtml_function_coverage=1 00:25:42.142 --rc genhtml_legend=1 00:25:42.142 --rc geninfo_all_blocks=1 00:25:42.142 --rc geninfo_unexecuted_blocks=1 00:25:42.142 00:25:42.142 ' 00:25:42.142 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:25:42.142 13:20:28 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:25:42.142 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:25:42.142 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:25:42.142 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:25:42.142 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:25:42.142 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:25:42.142 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:25:42.142 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:25:42.142 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:25:42.142 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:25:42.142 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:25:42.142 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:25:42.142 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:25:42.142 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:25:42.142 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:25:42.142 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:25:42.142 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:25:42.142 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:25:42.142 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:25:42.142 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:25:42.142 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:25:42.142 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:25:42.142 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:25:42.142 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90867 00:25:42.142 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:25:42.142 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 
90867 00:25:42.142 13:20:28 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:25:42.142 13:20:28 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90867 ']' 00:25:42.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:42.142 13:20:28 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:42.142 13:20:28 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:42.142 13:20:28 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:42.142 13:20:28 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:42.142 13:20:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:42.142 [2024-12-06 13:20:29.057892] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:25:42.142 [2024-12-06 13:20:29.058065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90867 ] 00:25:42.401 [2024-12-06 13:20:29.242521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.401 [2024-12-06 13:20:29.385046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.340 13:20:30 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:43.340 13:20:30 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:25:43.340 13:20:30 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:25:43.340 13:20:30 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:25:43.340 13:20:30 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:25:43.340 13:20:30 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.340 13:20:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:43.599 Malloc0 00:25:43.599 Malloc1 00:25:43.599 Malloc2 00:25:43.599 13:20:30 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.599 13:20:30 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:25:43.599 13:20:30 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.599 13:20:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:43.599 13:20:30 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.599 13:20:30 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:25:43.599 13:20:30 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:25:43.599 13:20:30 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.599 13:20:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:43.599 13:20:30 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.599 13:20:30 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:25:43.599 13:20:30 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.599 13:20:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:43.599 13:20:30 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.599 13:20:30 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:25:43.599 13:20:30 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.599 13:20:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:43.599 13:20:30 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.599 13:20:30 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:25:43.599 13:20:30 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 
00:25:43.599 13:20:30 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:25:43.599 13:20:30 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.599 13:20:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:43.858 13:20:30 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.858 13:20:30 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:25:43.858 13:20:30 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:25:43.858 13:20:30 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "92082432-8cfc-4e47-b8ab-0c85c79ff0e7"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "92082432-8cfc-4e47-b8ab-0c85c79ff0e7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "92082432-8cfc-4e47-b8ab-0c85c79ff0e7",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "539798d8-1c8a-4e5f-82ad-fc81983bd5c9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"0e93e1c2-aeaf-4d1f-a662-56088a5ec80a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "c7afbab7-6d9a-4871-80c9-e42837a60649",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:25:43.858 13:20:30 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:25:43.858 13:20:30 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:25:43.858 13:20:30 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:25:43.858 13:20:30 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 90867 00:25:43.858 13:20:30 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90867 ']' 00:25:43.858 13:20:30 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90867 00:25:43.858 13:20:30 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:25:43.858 13:20:30 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:43.858 13:20:30 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90867 00:25:43.858 killing process with pid 90867 00:25:43.858 13:20:30 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:43.858 13:20:30 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:43.858 13:20:30 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90867' 00:25:43.858 13:20:30 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90867 00:25:43.858 13:20:30 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90867 00:25:46.392 13:20:33 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:46.392 13:20:33 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:25:46.392 13:20:33 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:25:46.392 13:20:33 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:46.392 13:20:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:46.392 ************************************ 00:25:46.392 START TEST bdev_hello_world 00:25:46.392 ************************************ 00:25:46.392 13:20:33 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:25:46.651 [2024-12-06 13:20:33.486978] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:25:46.651 [2024-12-06 13:20:33.487196] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90935 ] 00:25:46.909 [2024-12-06 13:20:33.673566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.909 [2024-12-06 13:20:33.817111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.477 [2024-12-06 13:20:34.399737] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:25:47.477 [2024-12-06 13:20:34.399812] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:25:47.477 [2024-12-06 13:20:34.399855] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:25:47.477 [2024-12-06 13:20:34.400435] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:25:47.477 [2024-12-06 13:20:34.400646] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:25:47.477 [2024-12-06 13:20:34.400677] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:25:47.477 [2024-12-06 13:20:34.400773] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:25:47.477 00:25:47.477 [2024-12-06 13:20:34.400803] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:25:48.850 ************************************ 00:25:48.850 END TEST bdev_hello_world 00:25:48.850 ************************************ 00:25:48.850 00:25:48.850 real 0m2.411s 00:25:48.850 user 0m1.937s 00:25:48.850 sys 0m0.349s 00:25:48.850 13:20:35 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:48.850 13:20:35 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:25:48.850 13:20:35 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:25:48.850 13:20:35 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:48.850 13:20:35 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:48.850 13:20:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:48.850 ************************************ 00:25:48.850 START TEST bdev_bounds 00:25:48.850 ************************************ 00:25:48.850 Process bdevio pid: 90977 00:25:48.850 13:20:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:25:48.850 13:20:35 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90977 00:25:48.850 13:20:35 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:25:48.850 13:20:35 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90977' 00:25:48.850 13:20:35 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:25:48.850 13:20:35 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90977 00:25:48.850 13:20:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90977 ']' 00:25:48.850 13:20:35 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:48.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:48.850 13:20:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:48.850 13:20:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:48.850 13:20:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:48.850 13:20:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:25:49.108 [2024-12-06 13:20:35.959655] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:25:49.108 [2024-12-06 13:20:35.960112] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90977 ] 00:25:49.367 [2024-12-06 13:20:36.145110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:49.367 [2024-12-06 13:20:36.350552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:49.367 [2024-12-06 13:20:36.350685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.367 [2024-12-06 13:20:36.350695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:50.301 13:20:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:50.301 13:20:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:25:50.301 13:20:36 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:25:50.301 I/O targets: 00:25:50.301 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:25:50.301 00:25:50.301 
00:25:50.301 CUnit - A unit testing framework for C - Version 2.1-3 00:25:50.301 http://cunit.sourceforge.net/ 00:25:50.301 00:25:50.301 00:25:50.301 Suite: bdevio tests on: raid5f 00:25:50.301 Test: blockdev write read block ...passed 00:25:50.301 Test: blockdev write zeroes read block ...passed 00:25:50.301 Test: blockdev write zeroes read no split ...passed 00:25:50.301 Test: blockdev write zeroes read split ...passed 00:25:50.559 Test: blockdev write zeroes read split partial ...passed 00:25:50.559 Test: blockdev reset ...passed 00:25:50.559 Test: blockdev write read 8 blocks ...passed 00:25:50.559 Test: blockdev write read size > 128k ...passed 00:25:50.559 Test: blockdev write read invalid size ...passed 00:25:50.559 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:50.559 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:50.559 Test: blockdev write read max offset ...passed 00:25:50.559 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:50.559 Test: blockdev writev readv 8 blocks ...passed 00:25:50.559 Test: blockdev writev readv 30 x 1block ...passed 00:25:50.559 Test: blockdev writev readv block ...passed 00:25:50.559 Test: blockdev writev readv size > 128k ...passed 00:25:50.559 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:50.559 Test: blockdev comparev and writev ...passed 00:25:50.559 Test: blockdev nvme passthru rw ...passed 00:25:50.559 Test: blockdev nvme passthru vendor specific ...passed 00:25:50.559 Test: blockdev nvme admin passthru ...passed 00:25:50.559 Test: blockdev copy ...passed 00:25:50.559 00:25:50.559 Run Summary: Type Total Ran Passed Failed Inactive 00:25:50.559 suites 1 1 n/a 0 0 00:25:50.559 tests 23 23 23 0 0 00:25:50.559 asserts 130 130 130 0 n/a 00:25:50.559 00:25:50.559 Elapsed time = 0.596 seconds 00:25:50.559 0 00:25:50.559 13:20:37 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90977 00:25:50.559 
13:20:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90977 ']' 00:25:50.559 13:20:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90977 00:25:50.559 13:20:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:25:50.559 13:20:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:50.559 13:20:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90977 00:25:50.559 13:20:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:50.559 13:20:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:50.559 13:20:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90977' 00:25:50.559 killing process with pid 90977 00:25:50.559 13:20:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90977 00:25:50.559 13:20:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90977 00:25:51.934 13:20:38 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:25:51.934 00:25:51.934 real 0m3.033s 00:25:51.934 user 0m7.350s 00:25:51.934 sys 0m0.508s 00:25:51.934 13:20:38 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:51.934 13:20:38 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:25:51.934 ************************************ 00:25:51.934 END TEST bdev_bounds 00:25:51.934 ************************************ 00:25:51.934 13:20:38 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:25:51.934 13:20:38 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:51.934 13:20:38 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:51.934 
13:20:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:51.934 ************************************ 00:25:51.934 START TEST bdev_nbd 00:25:51.934 ************************************ 00:25:51.934 13:20:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:25:51.934 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:25:51.934 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:25:51.934 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:51.934 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:25:51.934 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:25:51.934 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:25:51.934 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:25:51.934 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:25:51.934 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:25:51.934 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:25:51.934 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:25:51.934 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:25:51.934 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:25:51.934 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:25:51.934 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:25:51.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:25:51.934 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=91043 00:25:51.934 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:25:51.934 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:25:51.934 13:20:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 91043 /var/tmp/spdk-nbd.sock 00:25:51.934 13:20:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 91043 ']' 00:25:51.934 13:20:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:25:51.934 13:20:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:51.934 13:20:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:25:51.934 13:20:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:51.934 13:20:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:25:52.192 [2024-12-06 13:20:39.055913] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:25:52.192 [2024-12-06 13:20:39.056396] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:52.490 [2024-12-06 13:20:39.247651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.490 [2024-12-06 13:20:39.407987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.075 13:20:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:53.075 13:20:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:25:53.075 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:25:53.334 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:53.334 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:25:53.334 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:25:53.334 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:25:53.334 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:53.334 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:25:53.334 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:25:53.334 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:25:53.334 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:25:53.334 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:25:53.334 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:25:53.334 13:20:40 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:25:53.593 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:25:53.593 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:25:53.593 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:25:53.593 13:20:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:25:53.593 13:20:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:25:53.593 13:20:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:53.593 13:20:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:53.593 13:20:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:25:53.593 13:20:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:25:53.593 13:20:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:53.593 13:20:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:53.593 13:20:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:53.593 1+0 records in 00:25:53.593 1+0 records out 00:25:53.593 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360471 s, 11.4 MB/s 00:25:53.593 13:20:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:53.593 13:20:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:25:53.593 13:20:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:53.593 13:20:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:25:53.593 13:20:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:25:53.593 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:25:53.593 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:25:53.593 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:53.853 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:25:53.853 { 00:25:53.853 "nbd_device": "/dev/nbd0", 00:25:53.853 "bdev_name": "raid5f" 00:25:53.853 } 00:25:53.853 ]' 00:25:53.853 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:25:53.853 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:25:53.853 { 00:25:53.853 "nbd_device": "/dev/nbd0", 00:25:53.853 "bdev_name": "raid5f" 00:25:53.853 } 00:25:53.853 ]' 00:25:53.853 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:25:53.853 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:25:53.853 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:53.853 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:53.853 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:53.853 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:25:53.853 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:53.853 13:20:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:54.112 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:25:54.112 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:54.112 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:54.112 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:54.112 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:54.112 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:54.112 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:54.112 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:54.112 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:25:54.112 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:54.112 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:54.680 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:25:54.680 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:25:54.680 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:25:54.680 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:25:54.680 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:25:54.680 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:54.680 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:25:54.680 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:25:54.680 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:25:54.680 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:25:54.680 13:20:41 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:25:54.680 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:25:54.680 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:25:54.680 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:54.680 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:25:54.680 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:25:54.680 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:25:54.680 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:25:54.680 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:25:54.680 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:54.680 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:25:54.680 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:54.680 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:54.680 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:54.680 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:25:54.680 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:54.680 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:54.680 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:25:54.939 /dev/nbd0 00:25:54.939 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:54.939 13:20:41 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:54.939 13:20:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:25:54.939 13:20:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:25:54.939 13:20:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:54.939 13:20:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:54.939 13:20:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:25:54.939 13:20:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:25:54.939 13:20:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:54.939 13:20:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:54.939 13:20:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:54.939 1+0 records in 00:25:54.939 1+0 records out 00:25:54.939 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040893 s, 10.0 MB/s 00:25:54.939 13:20:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:54.939 13:20:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:25:54.939 13:20:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:54.939 13:20:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:54.939 13:20:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:25:54.939 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:54.939 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:54.939 13:20:41 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:25:54.939 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:54.939 13:20:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:55.198 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:25:55.198 { 00:25:55.198 "nbd_device": "/dev/nbd0", 00:25:55.198 "bdev_name": "raid5f" 00:25:55.198 } 00:25:55.198 ]' 00:25:55.198 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:25:55.198 { 00:25:55.198 "nbd_device": "/dev/nbd0", 00:25:55.198 "bdev_name": "raid5f" 00:25:55.198 } 00:25:55.198 ]' 00:25:55.198 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:25:55.198 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:25:55.198 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:25:55.198 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:55.198 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:25:55.198 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:25:55.198 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:25:55.198 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:25:55.198 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:25:55.198 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:25:55.198 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:25:55.198 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:25:55.198 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:25:55.198 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:25:55.198 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:25:55.198 256+0 records in 00:25:55.198 256+0 records out 00:25:55.198 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00756848 s, 139 MB/s 00:25:55.198 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:55.198 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:25:55.457 256+0 records in 00:25:55.457 256+0 records out 00:25:55.457 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0393628 s, 26.6 MB/s 00:25:55.457 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:25:55.457 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:25:55.457 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:25:55.457 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:25:55.457 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:25:55.457 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:25:55.457 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:25:55.457 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:55.457 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:25:55.457 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:25:55.457 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:25:55.457 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:55.457 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:55.457 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:55.457 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:25:55.457 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:55.457 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:55.715 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:55.715 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:55.715 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:55.716 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:55.716 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:55.716 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:55.716 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:55.716 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:55.716 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:25:55.716 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:55.716 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:25:55.974 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:25:55.974 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:25:55.974 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:25:55.974 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:25:55.974 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:55.974 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:25:55.974 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:25:55.974 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:25:55.974 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:25:55.974 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:25:55.974 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:25:55.974 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:25:55.974 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:25:55.974 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:55.974 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:25:55.974 13:20:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:25:56.542 malloc_lvol_verify 00:25:56.542 13:20:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:25:56.801 f5d3a785-cc20-4d01-9c44-4b095f7581a7 00:25:56.801 13:20:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:25:57.059 dfc7e4a9-8a88-4115-983f-4bd2304bc1f1 00:25:57.059 13:20:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:25:57.317 /dev/nbd0 00:25:57.317 13:20:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:25:57.317 13:20:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:25:57.317 13:20:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:25:57.317 13:20:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:25:57.317 13:20:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:25:57.317 mke2fs 1.47.0 (5-Feb-2023) 00:25:57.317 Discarding device blocks: 0/4096 done 00:25:57.317 Creating filesystem with 4096 1k blocks and 1024 inodes 00:25:57.317 00:25:57.317 Allocating group tables: 0/1 done 00:25:57.317 Writing inode tables: 0/1 done 00:25:57.317 Creating journal (1024 blocks): done 00:25:57.317 Writing superblocks and filesystem accounting information: 0/1 done 00:25:57.317 00:25:57.317 13:20:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:25:57.317 13:20:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:57.317 13:20:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:57.317 13:20:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:57.317 13:20:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:25:57.317 13:20:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:57.318 13:20:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:57.577 13:20:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:57.577 13:20:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:57.577 13:20:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:57.577 13:20:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:57.577 13:20:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:57.577 13:20:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:57.577 13:20:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:57.577 13:20:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:57.577 13:20:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 91043 00:25:57.577 13:20:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 91043 ']' 00:25:57.577 13:20:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 91043 00:25:57.577 13:20:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:25:57.577 13:20:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:57.577 13:20:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91043 00:25:57.577 13:20:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:57.577 13:20:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:57.577 killing process with pid 91043 00:25:57.577 13:20:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91043' 00:25:57.577 13:20:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 91043 00:25:57.577 13:20:44 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 91043 00:25:58.968 13:20:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:25:58.968 00:25:58.968 real 0m7.016s 00:25:58.968 user 0m10.046s 00:25:58.968 sys 0m1.520s 00:25:58.968 13:20:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:58.968 ************************************ 00:25:58.968 END TEST bdev_nbd 00:25:58.968 ************************************ 00:25:58.968 13:20:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:25:59.227 13:20:45 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:25:59.227 13:20:45 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:25:59.227 13:20:45 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:25:59.227 13:20:45 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:25:59.227 13:20:45 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:59.227 13:20:45 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:59.227 13:20:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:59.227 ************************************ 00:25:59.227 START TEST bdev_fio 00:25:59.227 ************************************ 00:25:59.227 13:20:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:25:59.227 13:20:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:25:59.227 13:20:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:25:59.227 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:25:59.227 13:20:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:25:59.227 13:20:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:25:59.227 13:20:46 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:25:59.227 13:20:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:25:59.227 13:20:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:25:59.227 13:20:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:25:59.227 13:20:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:25:59.227 13:20:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:25:59.227 13:20:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:25:59.227 13:20:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:25:59.227 13:20:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:25:59.227 13:20:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:25:59.227 13:20:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:25:59.227 13:20:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:25:59.227 13:20:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:25:59.227 13:20:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:25:59.227 13:20:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:25:59.227 13:20:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:25:59.227 13:20:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:25:59.227 13:20:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:25:59.227 13:20:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:25:59.227 13:20:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:25:59.227 13:20:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:25:59.227 13:20:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:25:59.227 13:20:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:25:59.228 13:20:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:25:59.228 13:20:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:25:59.228 13:20:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:59.228 13:20:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:25:59.228 ************************************ 00:25:59.228 START TEST bdev_fio_rw_verify 00:25:59.228 ************************************ 00:25:59.228 13:20:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:25:59.228 13:20:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:25:59.228 13:20:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:59.228 13:20:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:59.228 13:20:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:59.228 13:20:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:59.228 13:20:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:25:59.228 13:20:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:59.228 13:20:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:59.228 13:20:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:59.228 13:20:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:25:59.228 13:20:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:59.228 13:20:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:25:59.228 13:20:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:25:59.228 13:20:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:25:59.228 13:20:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:59.228 13:20:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:25:59.486 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:25:59.486 fio-3.35 00:25:59.486 Starting 1 thread 00:26:11.689 00:26:11.689 job_raid5f: (groupid=0, jobs=1): err= 0: pid=91255: Fri Dec 6 13:20:57 2024 00:26:11.689 read: IOPS=7818, BW=30.5MiB/s (32.0MB/s)(305MiB/10001msec) 00:26:11.689 slat (usec): min=23, max=120, avg=32.74, stdev= 8.05 00:26:11.689 clat (usec): min=14, max=510, avg=203.65, stdev=79.50 00:26:11.689 lat (usec): min=45, max=543, avg=236.39, stdev=80.45 00:26:11.689 clat percentiles (usec): 00:26:11.689 | 50.000th=[ 202], 99.000th=[ 363], 99.900th=[ 416], 99.990th=[ 461], 00:26:11.689 | 99.999th=[ 510] 00:26:11.689 write: IOPS=8226, BW=32.1MiB/s (33.7MB/s)(318MiB/9885msec); 0 zone resets 00:26:11.689 slat (usec): min=11, max=1515, avg=24.65, stdev= 9.32 00:26:11.689 clat (usec): min=85, max=2110, avg=468.91, stdev=64.05 00:26:11.689 lat (usec): min=107, max=2132, avg=493.56, stdev=65.70 00:26:11.689 clat percentiles (usec): 00:26:11.689 | 50.000th=[ 474], 99.000th=[ 611], 99.900th=[ 693], 99.990th=[ 1090], 00:26:11.689 | 99.999th=[ 2114] 00:26:11.689 bw ( KiB/s): min=29432, max=34472, per=98.63%, avg=32456.00, stdev=1616.19, samples=19 00:26:11.689 iops : min= 7358, max= 8618, avg=8114.00, stdev=404.05, samples=19 00:26:11.689 lat (usec) : 20=0.01%, 50=0.01%, 100=5.32%, 
250=28.05%, 500=50.12% 00:26:11.689 lat (usec) : 750=16.49%, 1000=0.02% 00:26:11.689 lat (msec) : 2=0.01%, 4=0.01% 00:26:11.689 cpu : usr=98.51%, sys=0.69%, ctx=21, majf=0, minf=6922 00:26:11.689 IO depths : 1=7.8%, 2=19.9%, 4=55.2%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:11.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.689 complete : 0=0.0%, 4=90.1%, 8=9.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.689 issued rwts: total=78189,81316,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.689 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:11.689 00:26:11.689 Run status group 0 (all jobs): 00:26:11.689 READ: bw=30.5MiB/s (32.0MB/s), 30.5MiB/s-30.5MiB/s (32.0MB/s-32.0MB/s), io=305MiB (320MB), run=10001-10001msec 00:26:11.689 WRITE: bw=32.1MiB/s (33.7MB/s), 32.1MiB/s-32.1MiB/s (33.7MB/s-33.7MB/s), io=318MiB (333MB), run=9885-9885msec 00:26:12.256 ----------------------------------------------------- 00:26:12.256 Suppressions used: 00:26:12.256 count bytes template 00:26:12.256 1 7 /usr/src/fio/parse.c 00:26:12.256 678 65088 /usr/src/fio/iolog.c 00:26:12.256 1 8 libtcmalloc_minimal.so 00:26:12.256 1 904 libcrypto.so 00:26:12.256 ----------------------------------------------------- 00:26:12.256 00:26:12.256 00:26:12.256 real 0m12.995s 00:26:12.256 user 0m13.312s 00:26:12.256 sys 0m0.870s 00:26:12.256 13:20:59 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:12.256 13:20:59 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:26:12.256 ************************************ 00:26:12.256 END TEST bdev_fio_rw_verify 00:26:12.256 ************************************ 00:26:12.256 13:20:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:26:12.256 13:20:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:12.256 13:20:59 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:26:12.256 13:20:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:12.256 13:20:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:26:12.256 13:20:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:26:12.256 13:20:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:26:12.256 13:20:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:26:12.256 13:20:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:26:12.256 13:20:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:26:12.256 13:20:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:26:12.256 13:20:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:12.256 13:20:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:26:12.256 13:20:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:26:12.256 13:20:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:26:12.256 13:20:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:26:12.256 13:20:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "92082432-8cfc-4e47-b8ab-0c85c79ff0e7"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "92082432-8cfc-4e47-b8ab-0c85c79ff0e7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "92082432-8cfc-4e47-b8ab-0c85c79ff0e7",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "539798d8-1c8a-4e5f-82ad-fc81983bd5c9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "0e93e1c2-aeaf-4d1f-a662-56088a5ec80a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "c7afbab7-6d9a-4871-80c9-e42837a60649",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:26:12.256 13:20:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:26:12.256 13:20:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:26:12.256 13:20:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:12.256 /home/vagrant/spdk_repo/spdk 00:26:12.256 13:20:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:26:12.256 13:20:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:26:12.256 13:20:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:26:12.256 00:26:12.256 real 0m13.226s 00:26:12.256 user 0m13.418s 00:26:12.256 sys 0m0.973s 00:26:12.256 13:20:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:12.256 13:20:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:26:12.256 ************************************ 00:26:12.256 END TEST bdev_fio 00:26:12.256 ************************************ 00:26:12.515 13:20:59 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:12.515 13:20:59 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:26:12.515 13:20:59 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:26:12.515 13:20:59 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:12.515 13:20:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:12.515 ************************************ 00:26:12.515 START TEST bdev_verify 00:26:12.515 ************************************ 00:26:12.515 13:20:59 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:26:12.515 [2024-12-06 13:20:59.390377] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 
00:26:12.515 [2024-12-06 13:20:59.390583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91418 ] 00:26:12.774 [2024-12-06 13:20:59.567517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:12.774 [2024-12-06 13:20:59.716618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:12.774 [2024-12-06 13:20:59.716642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.341 Running I/O for 5 seconds... 00:26:15.703 9593.00 IOPS, 37.47 MiB/s [2024-12-06T13:21:03.655Z] 9199.50 IOPS, 35.94 MiB/s [2024-12-06T13:21:04.592Z] 10168.67 IOPS, 39.72 MiB/s [2024-12-06T13:21:05.528Z] 10701.75 IOPS, 41.80 MiB/s [2024-12-06T13:21:05.528Z] 11071.60 IOPS, 43.25 MiB/s 00:26:18.512 Latency(us) 00:26:18.512 [2024-12-06T13:21:05.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:18.512 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:18.512 Verification LBA range: start 0x0 length 0x2000 00:26:18.512 raid5f : 5.01 5593.77 21.85 0.00 0.00 34621.82 2115.03 29074.15 00:26:18.512 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:18.512 Verification LBA range: start 0x2000 length 0x2000 00:26:18.512 raid5f : 5.02 5454.17 21.31 0.00 0.00 35374.45 286.72 31695.59 00:26:18.512 [2024-12-06T13:21:05.528Z] =================================================================================================================== 00:26:18.512 [2024-12-06T13:21:05.528Z] Total : 11047.94 43.16 0.00 0.00 34993.53 286.72 31695.59 00:26:19.888 00:26:19.888 real 0m7.396s 00:26:19.888 user 0m13.532s 00:26:19.888 sys 0m0.365s 00:26:19.888 13:21:06 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:19.888 13:21:06 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:26:19.888 ************************************ 00:26:19.888 END TEST bdev_verify 00:26:19.888 ************************************ 00:26:19.888 13:21:06 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:26:19.888 13:21:06 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:26:19.888 13:21:06 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:19.888 13:21:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:19.888 ************************************ 00:26:19.888 START TEST bdev_verify_big_io 00:26:19.888 ************************************ 00:26:19.888 13:21:06 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:26:19.888 [2024-12-06 13:21:06.831975] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:26:19.888 [2024-12-06 13:21:06.832210] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91511 ] 00:26:20.147 [2024-12-06 13:21:07.003204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:20.148 [2024-12-06 13:21:07.153126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.148 [2024-12-06 13:21:07.153127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.716 Running I/O for 5 seconds... 
00:26:23.029 506.00 IOPS, 31.62 MiB/s [2024-12-06T13:21:11.125Z] 696.00 IOPS, 43.50 MiB/s [2024-12-06T13:21:12.099Z] 739.00 IOPS, 46.19 MiB/s [2024-12-06T13:21:13.033Z] 761.00 IOPS, 47.56 MiB/s [2024-12-06T13:21:13.033Z] 774.00 IOPS, 48.38 MiB/s 00:26:26.017 Latency(us) 00:26:26.017 [2024-12-06T13:21:13.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:26.017 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:26:26.017 Verification LBA range: start 0x0 length 0x200 00:26:26.017 raid5f : 5.24 399.36 24.96 0.00 0.00 7883263.00 176.87 373674.36 00:26:26.017 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:26:26.017 Verification LBA range: start 0x200 length 0x200 00:26:26.017 raid5f : 5.22 389.04 24.31 0.00 0.00 8194357.77 187.11 383206.87 00:26:26.017 [2024-12-06T13:21:13.033Z] =================================================================================================================== 00:26:26.017 [2024-12-06T13:21:13.033Z] Total : 788.40 49.27 0.00 0.00 8036473.03 176.87 383206.87 00:26:27.391 00:26:27.391 real 0m7.597s 00:26:27.391 user 0m13.894s 00:26:27.391 sys 0m0.378s 00:26:27.391 13:21:14 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:27.391 13:21:14 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:26:27.391 ************************************ 00:26:27.391 END TEST bdev_verify_big_io 00:26:27.391 ************************************ 00:26:27.391 13:21:14 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:27.391 13:21:14 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:26:27.391 13:21:14 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:27.391 13:21:14 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:27.391 ************************************ 00:26:27.391 START TEST bdev_write_zeroes 00:26:27.391 ************************************ 00:26:27.391 13:21:14 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:27.647 [2024-12-06 13:21:14.494588] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:26:27.647 [2024-12-06 13:21:14.495030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91614 ] 00:26:27.904 [2024-12-06 13:21:14.680994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.904 [2024-12-06 13:21:14.821464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.469 Running I/O for 1 seconds... 
00:26:29.402 21783.00 IOPS, 85.09 MiB/s 00:26:29.402 Latency(us) 00:26:29.402 [2024-12-06T13:21:16.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:29.402 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:26:29.402 raid5f : 1.01 21752.27 84.97 0.00 0.00 5862.05 2204.39 7864.32 00:26:29.402 [2024-12-06T13:21:16.418Z] =================================================================================================================== 00:26:29.402 [2024-12-06T13:21:16.418Z] Total : 21752.27 84.97 0.00 0.00 5862.05 2204.39 7864.32 00:26:31.301 00:26:31.301 real 0m3.422s 00:26:31.301 user 0m2.895s 00:26:31.301 sys 0m0.391s 00:26:31.301 13:21:17 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:31.301 13:21:17 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:26:31.301 ************************************ 00:26:31.301 END TEST bdev_write_zeroes 00:26:31.301 ************************************ 00:26:31.301 13:21:17 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:31.301 13:21:17 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:26:31.301 13:21:17 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:31.301 13:21:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:31.301 ************************************ 00:26:31.301 START TEST bdev_json_nonenclosed 00:26:31.301 ************************************ 00:26:31.301 13:21:17 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:31.301 [2024-12-06 
13:21:17.959864] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:26:31.301 [2024-12-06 13:21:17.960293] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91663 ] 00:26:31.301 [2024-12-06 13:21:18.140256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.301 [2024-12-06 13:21:18.293795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.301 [2024-12-06 13:21:18.293929] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:26:31.301 [2024-12-06 13:21:18.293970] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:26:31.301 [2024-12-06 13:21:18.293985] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:31.868 00:26:31.868 real 0m0.716s 00:26:31.868 user 0m0.457s 00:26:31.868 sys 0m0.152s 00:26:31.868 ************************************ 00:26:31.868 END TEST bdev_json_nonenclosed 00:26:31.868 ************************************ 00:26:31.868 13:21:18 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:31.868 13:21:18 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:26:31.868 13:21:18 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:31.868 13:21:18 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:26:31.868 13:21:18 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:31.868 13:21:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:31.868 
************************************ 00:26:31.868 START TEST bdev_json_nonarray 00:26:31.868 ************************************ 00:26:31.868 13:21:18 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:31.868 [2024-12-06 13:21:18.739163] Starting SPDK v25.01-pre git sha1 e9db16374 / DPDK 24.03.0 initialization... 00:26:31.868 [2024-12-06 13:21:18.739333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91694 ] 00:26:32.127 [2024-12-06 13:21:18.918205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.127 [2024-12-06 13:21:19.062598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.127 [2024-12-06 13:21:19.062798] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:26:32.127 [2024-12-06 13:21:19.062831] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:26:32.127 [2024-12-06 13:21:19.062862] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:32.385 ************************************ 00:26:32.385 END TEST bdev_json_nonarray 00:26:32.385 ************************************ 00:26:32.385 00:26:32.385 real 0m0.702s 00:26:32.385 user 0m0.463s 00:26:32.385 sys 0m0.133s 00:26:32.385 13:21:19 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:32.385 13:21:19 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:26:32.385 13:21:19 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:26:32.385 13:21:19 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:26:32.385 13:21:19 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:26:32.385 13:21:19 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:26:32.385 13:21:19 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:26:32.385 13:21:19 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:26:32.385 13:21:19 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:26:32.385 13:21:19 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:26:32.385 13:21:19 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:26:32.385 13:21:19 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:26:32.386 13:21:19 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:26:32.386 00:26:32.386 real 0m50.652s 00:26:32.386 user 1m8.616s 00:26:32.386 sys 0m5.908s 00:26:32.386 13:21:19 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:32.386 13:21:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:26:32.386 
************************************ 00:26:32.386 END TEST blockdev_raid5f 00:26:32.386 ************************************ 00:26:32.644 13:21:19 -- spdk/autotest.sh@194 -- # uname -s 00:26:32.644 13:21:19 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:26:32.644 13:21:19 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:26:32.644 13:21:19 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:26:32.644 13:21:19 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:26:32.644 13:21:19 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:26:32.644 13:21:19 -- spdk/autotest.sh@260 -- # timing_exit lib 00:26:32.644 13:21:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:32.644 13:21:19 -- common/autotest_common.sh@10 -- # set +x 00:26:32.644 13:21:19 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:26:32.644 13:21:19 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:26:32.644 13:21:19 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:26:32.644 13:21:19 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:26:32.644 13:21:19 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:26:32.644 13:21:19 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:26:32.644 13:21:19 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:26:32.644 13:21:19 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:26:32.644 13:21:19 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:26:32.644 13:21:19 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:26:32.644 13:21:19 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:26:32.644 13:21:19 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:26:32.644 13:21:19 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:26:32.644 13:21:19 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:26:32.644 13:21:19 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:26:32.644 13:21:19 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:26:32.644 13:21:19 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:26:32.644 13:21:19 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:26:32.644 13:21:19 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:26:32.644 13:21:19 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:26:32.644 13:21:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:32.644 13:21:19 -- common/autotest_common.sh@10 -- # set +x 00:26:32.644 13:21:19 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:26:32.644 13:21:19 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:26:32.644 13:21:19 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:26:32.644 13:21:19 -- common/autotest_common.sh@10 -- # set +x 00:26:34.545 INFO: APP EXITING 00:26:34.545 INFO: killing all VMs 00:26:34.545 INFO: killing vhost app 00:26:34.545 INFO: EXIT DONE 00:26:34.545 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:34.545 Waiting for block devices as requested 00:26:34.545 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:34.803 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:35.772 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:35.772 Cleaning 00:26:35.772 Removing: /var/run/dpdk/spdk0/config 00:26:35.772 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:26:35.772 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:26:35.772 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:26:35.772 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:26:35.772 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:26:35.772 Removing: /var/run/dpdk/spdk0/hugepage_info 00:26:35.772 Removing: /dev/shm/spdk_tgt_trace.pid56991 00:26:35.772 Removing: /var/run/dpdk/spdk0 00:26:35.772 Removing: /var/run/dpdk/spdk_pid56750 00:26:35.772 Removing: /var/run/dpdk/spdk_pid56991 00:26:35.772 Removing: /var/run/dpdk/spdk_pid57220 00:26:35.772 Removing: /var/run/dpdk/spdk_pid57324 00:26:35.772 Removing: /var/run/dpdk/spdk_pid57380 00:26:35.772 Removing: /var/run/dpdk/spdk_pid57508 00:26:35.772 Removing: /var/run/dpdk/spdk_pid57537 
00:26:35.772 Removing: /var/run/dpdk/spdk_pid57747 00:26:35.772 Removing: /var/run/dpdk/spdk_pid57853 00:26:35.772 Removing: /var/run/dpdk/spdk_pid57971 00:26:35.772 Removing: /var/run/dpdk/spdk_pid58093 00:26:35.772 Removing: /var/run/dpdk/spdk_pid58201 00:26:35.772 Removing: /var/run/dpdk/spdk_pid58246 00:26:35.772 Removing: /var/run/dpdk/spdk_pid58288 00:26:35.772 Removing: /var/run/dpdk/spdk_pid58359 00:26:35.772 Removing: /var/run/dpdk/spdk_pid58470 00:26:35.772 Removing: /var/run/dpdk/spdk_pid58952 00:26:35.772 Removing: /var/run/dpdk/spdk_pid59038 00:26:35.772 Removing: /var/run/dpdk/spdk_pid59112 00:26:35.772 Removing: /var/run/dpdk/spdk_pid59128 00:26:35.772 Removing: /var/run/dpdk/spdk_pid59287 00:26:35.772 Removing: /var/run/dpdk/spdk_pid59304 00:26:35.772 Removing: /var/run/dpdk/spdk_pid59460 00:26:35.772 Removing: /var/run/dpdk/spdk_pid59481 00:26:35.772 Removing: /var/run/dpdk/spdk_pid59551 00:26:35.772 Removing: /var/run/dpdk/spdk_pid59569 00:26:35.772 Removing: /var/run/dpdk/spdk_pid59633 00:26:35.772 Removing: /var/run/dpdk/spdk_pid59662 00:26:35.772 Removing: /var/run/dpdk/spdk_pid59857 00:26:35.772 Removing: /var/run/dpdk/spdk_pid59894 00:26:35.772 Removing: /var/run/dpdk/spdk_pid59977 00:26:35.772 Removing: /var/run/dpdk/spdk_pid61381 00:26:35.772 Removing: /var/run/dpdk/spdk_pid61598 00:26:35.772 Removing: /var/run/dpdk/spdk_pid61738 00:26:35.772 Removing: /var/run/dpdk/spdk_pid62398 00:26:35.772 Removing: /var/run/dpdk/spdk_pid62615 00:26:35.772 Removing: /var/run/dpdk/spdk_pid62766 00:26:35.772 Removing: /var/run/dpdk/spdk_pid63415 00:26:35.772 Removing: /var/run/dpdk/spdk_pid63756 00:26:35.772 Removing: /var/run/dpdk/spdk_pid63902 00:26:35.772 Removing: /var/run/dpdk/spdk_pid65309 00:26:35.772 Removing: /var/run/dpdk/spdk_pid65574 00:26:35.772 Removing: /var/run/dpdk/spdk_pid65720 00:26:35.772 Removing: /var/run/dpdk/spdk_pid67142 00:26:35.772 Removing: /var/run/dpdk/spdk_pid67403 00:26:35.772 Removing: /var/run/dpdk/spdk_pid67549 
00:26:35.772 Removing: /var/run/dpdk/spdk_pid68964 00:26:35.772 Removing: /var/run/dpdk/spdk_pid69417 00:26:35.772 Removing: /var/run/dpdk/spdk_pid69568 00:26:35.772 Removing: /var/run/dpdk/spdk_pid71090 00:26:35.772 Removing: /var/run/dpdk/spdk_pid71360 00:26:35.772 Removing: /var/run/dpdk/spdk_pid71506 00:26:35.772 Removing: /var/run/dpdk/spdk_pid73020 00:26:35.772 Removing: /var/run/dpdk/spdk_pid73285 00:26:35.772 Removing: /var/run/dpdk/spdk_pid73436 00:26:35.772 Removing: /var/run/dpdk/spdk_pid74957 00:26:35.772 Removing: /var/run/dpdk/spdk_pid75456 00:26:35.772 Removing: /var/run/dpdk/spdk_pid75607 00:26:35.772 Removing: /var/run/dpdk/spdk_pid75751 00:26:35.772 Removing: /var/run/dpdk/spdk_pid76202 00:26:35.772 Removing: /var/run/dpdk/spdk_pid76973 00:26:35.772 Removing: /var/run/dpdk/spdk_pid77378 00:26:35.772 Removing: /var/run/dpdk/spdk_pid78105 00:26:35.772 Removing: /var/run/dpdk/spdk_pid78585 00:26:35.772 Removing: /var/run/dpdk/spdk_pid79391 00:26:35.772 Removing: /var/run/dpdk/spdk_pid79834 00:26:35.772 Removing: /var/run/dpdk/spdk_pid81845 00:26:35.772 Removing: /var/run/dpdk/spdk_pid82299 00:26:35.772 Removing: /var/run/dpdk/spdk_pid82749 00:26:35.772 Removing: /var/run/dpdk/spdk_pid84879 00:26:35.772 Removing: /var/run/dpdk/spdk_pid85375 00:26:35.772 Removing: /var/run/dpdk/spdk_pid85880 00:26:35.772 Removing: /var/run/dpdk/spdk_pid87001 00:26:35.772 Removing: /var/run/dpdk/spdk_pid87335 00:26:35.772 Removing: /var/run/dpdk/spdk_pid88292 00:26:35.772 Removing: /var/run/dpdk/spdk_pid88620 00:26:35.772 Removing: /var/run/dpdk/spdk_pid89580 00:26:35.772 Removing: /var/run/dpdk/spdk_pid89910 00:26:35.772 Removing: /var/run/dpdk/spdk_pid90587 00:26:35.772 Removing: /var/run/dpdk/spdk_pid90867 00:26:35.772 Removing: /var/run/dpdk/spdk_pid90935 00:26:35.772 Removing: /var/run/dpdk/spdk_pid90977 00:26:35.772 Removing: /var/run/dpdk/spdk_pid91240 00:26:35.772 Removing: /var/run/dpdk/spdk_pid91418 00:26:35.772 Removing: /var/run/dpdk/spdk_pid91511 
00:26:35.772 Removing: /var/run/dpdk/spdk_pid91614 00:26:35.772 Removing: /var/run/dpdk/spdk_pid91663 00:26:35.772 Removing: /var/run/dpdk/spdk_pid91694 00:26:35.772 Clean 00:26:36.031 13:21:22 -- common/autotest_common.sh@1453 -- # return 0 00:26:36.031 13:21:22 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:26:36.031 13:21:22 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:36.031 13:21:22 -- common/autotest_common.sh@10 -- # set +x 00:26:36.031 13:21:22 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:26:36.031 13:21:22 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:36.032 13:21:22 -- common/autotest_common.sh@10 -- # set +x 00:26:36.032 13:21:22 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:36.032 13:21:22 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:26:36.032 13:21:22 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:26:36.032 13:21:22 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:26:36.032 13:21:22 -- spdk/autotest.sh@398 -- # hostname 00:26:36.032 13:21:22 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:26:36.290 geninfo: WARNING: invalid characters removed from testname! 
00:27:02.826 13:21:47 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:04.199 13:21:51 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:07.480 13:21:53 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:10.072 13:21:56 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:12.627 13:21:59 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:15.917 13:22:02 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:18.475 13:22:05 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:18.475 13:22:05 -- spdk/autorun.sh@1 -- $ timing_finish 00:27:18.475 13:22:05 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:27:18.475 13:22:05 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:18.475 13:22:05 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:27:18.475 13:22:05 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:18.475 + [[ -n 5212 ]] 00:27:18.475 + sudo kill 5212 00:27:18.483 [Pipeline] } 00:27:18.499 [Pipeline] // timeout 00:27:18.504 [Pipeline] } 00:27:18.519 [Pipeline] // stage 00:27:18.524 [Pipeline] } 00:27:18.539 [Pipeline] // catchError 00:27:18.549 [Pipeline] stage 00:27:18.551 [Pipeline] { (Stop VM) 00:27:18.565 [Pipeline] sh 00:27:18.847 + vagrant halt 00:27:23.039 ==> default: Halting domain... 00:27:28.314 [Pipeline] sh 00:27:28.590 + vagrant destroy -f 00:27:32.782 ==> default: Removing domain... 
00:27:32.794 [Pipeline] sh 00:27:33.076 + mv output /var/jenkins/workspace/raid-vg-autotest_2/output 00:27:33.087 [Pipeline] } 00:27:33.104 [Pipeline] // stage 00:27:33.109 [Pipeline] } 00:27:33.123 [Pipeline] // dir 00:27:33.129 [Pipeline] } 00:27:33.145 [Pipeline] // wrap 00:27:33.151 [Pipeline] } 00:27:33.164 [Pipeline] // catchError 00:27:33.175 [Pipeline] stage 00:27:33.178 [Pipeline] { (Epilogue) 00:27:33.191 [Pipeline] sh 00:27:33.474 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:27:40.052 [Pipeline] catchError 00:27:40.054 [Pipeline] { 00:27:40.068 [Pipeline] sh 00:27:40.353 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:27:40.353 Artifacts sizes are good 00:27:40.362 [Pipeline] } 00:27:40.376 [Pipeline] // catchError 00:27:40.388 [Pipeline] archiveArtifacts 00:27:40.395 Archiving artifacts 00:27:40.488 [Pipeline] cleanWs 00:27:40.499 [WS-CLEANUP] Deleting project workspace... 00:27:40.499 [WS-CLEANUP] Deferred wipeout is used... 00:27:40.506 [WS-CLEANUP] done 00:27:40.508 [Pipeline] } 00:27:40.522 [Pipeline] // stage 00:27:40.526 [Pipeline] } 00:27:40.537 [Pipeline] // node 00:27:40.543 [Pipeline] End of Pipeline 00:27:40.563 Finished: SUCCESS